The document provides an introduction to cloud computing, discussing key concepts, common mistakes, and Newesis' experience with cloud adoption. Specifically:
- It defines cloud computing as a new type of service with capabilities beyond remote hosting, and discusses new use cases enabled by different cloud technologies.
- It outlines common mistakes like assuming costs will always be higher, capacity is infinite, or that availability and security are handled automatically.
- It shares Newesis' journey working with various cloud vendors since 2009 and why they recommend a multi-cloud approach to avoid lock-in and select the best cloud for a given need.
- Finally, it presents Newesis' "Cloud Cookbook" approach of transforming systems for the cloud rather than simply migrating them.
Anton Grishko, "Multi-cloud with Google Anthos, Kubernetes and Istio. How to s..." (Fwdays)
The focus of the interactive demonstration includes:
- Designing infrastructure for services in GCP and an on-premise data center
- Setup of environments using Google Kubernetes Engine and GKE On-Prem
- Configuration of Istio on GKE and launch of the demo application
- A demonstration of A/B testing with a vote for the final production design of the hybrid cross-environment application.
Target audience: CTOs, Architects, DevOps/System administrators.
This document discusses Nuxeo's capabilities for cloud-level scalability. It explains how Nuxeo leverages cloud infrastructure like AWS to allow applications to scale on demand and distribute components across multiple servers. Nuxeo uses technologies like Elasticsearch, Redis, and SQL databases to build distributed architectures that can scale workloads efficiently. The document also introduces Nuxeo.io, a platform that provides an end-to-end application factory service by deploying and managing Nuxeo applications in the cloud behind a common infrastructure.
Open stack in action: eNovance - Cloudwatt - European ambitions for OpenStack (eNovance)
The document discusses the European company Cloudwatt and its ambitions to become a leader in Infrastructure as a Service using OpenStack. Cloudwatt has over 200 customers, contributes significantly to OpenStack code and integration, and received 225 million Euro in funding to compete globally. OpenStack allows Cloudwatt to massively scale, achieve aggressive pricing, and ensure interoperability. Cloudwatt aims to be part of the OpenStack ecosystem, contribute to open source, and ensure security, transparency, and service level agreements for customers.
Serverless computing allows running applications without managing infrastructure. Google Cloud Platform offers serverless options like Cloud Functions, Cloud Run, and App Engine. Common serverless patterns include publish-subscribe using PubSub, triggering functions from events, and data pipelines with Dataflow. Serverless applications are built using containers, functions, and fully managed services to focus on code and reduce operational overhead.
VMware Cloud on AWS allows customers to run VMware workloads on AWS infrastructure providing operational consistency, existing skillsets and tools, and control and security. It introduces VMware's software-defined data center (SDDC) technologies like vSphere, vSAN, and NSX running on AWS. This provides enterprises hybrid cloud capabilities with elasticity, portability of applications between on-premises and cloud, and access to AWS native services. Customers can easily deploy and manage their VMware environments on AWS.
Connecting VMware Cloud on AWS to Native AWS Services - UKVMUG 2018 (Julian Wood)
Going through the options on connecting VMware Cloud on AWS with the myriad AWS services including RDS, ALB, S3, AWS Outposts, RDS on vSphere and CloudFoundation for EC2
This talk gives an introduction to deploying CloudStack infrastructure (VMs, networks, storage, etc.) with Terraform, using the Terraform cloudstack modules.
The document discusses where to start when considering moving applications and systems to the cloud. It suggests starting with applications that have lower data transfer requirements and pain points for cloud delivery, such as web serving, virtual desktops, and application development and testing. It provides examples of common cloud architectures and advises using a "start here" approach to initially move applications such as email, collaboration tools, and eMeetings to software as a service (SaaS) cloud options to realize cost savings. The document encourages organizations to start moving to the cloud today.
WSO2 Cloud Platform allows users to purchase computations, storage, and services on demand. It provides infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) models in both public and private cloud deployments. The platform is multi-tenant and auto-scales resources elastically based on load. It aims to simplify deploying and managing applications and services in the cloud with tools for development, identity management, and governance.
RackN is a software company based in Austin, TX that provides a unified operational control platform for hybrid cloud and infrastructure. Their platform aims to help operations teams improve productivity and automate lifecycle management of complex technology stacks at scale across multiple platforms like Mesos, Kubernetes, OpenStack, and tools like Terraform. RackN uses intelligent template-based workflows to compose and simplify operations across physical, cloud and platform infrastructures and APIs.
In this Hybrid Cloud Turbo Training (as given at Storage Expo | InfoSecurity 2015) we follow a 7-step plan to arrive at a hybrid cloud. We cover the legal side, the selection phase, and the configuration and connection of the clouds by means of a Cloud Management Portal.
Managing Ceph operational complexity with Juju (ShapeBlue)
James Page presented on using Juju and charms to manage the operational complexity of Ceph deployments. Juju provides an auto-magic deployment tool and model-driven operations that can be used to deploy Ceph along with related applications like rbd-mirror across multiple data centers. The Ceph charms encapsulate operational knowledge to handle tasks like installation, configuration, upgrades, scaling, and health monitoring. Juju allows defining the application model and relating applications across models, and includes features like MAAS for server provisioning and LXD for containers. Demonstrations showed using Juju actions to manage Ceph operations like creating pools, refreshing mirrors, and upgrading versions across availability zones.
These are the slides presented at the AWS re:Invent re:Cap event, delivered by Jongnam Lee, Professional Consultant at Amazon Web Services.
Summary: Learn about the optimization measures and architecture design methods to adopt in order to maximize the benefits of AWS cloud infrastructure. Pick up best practices from AWS performance optimization experts and find out which services to use, and how, to secure optimal performance as your infrastructure scales.
IBM Cloud Paris Meetup - 20180628 - Rex on ODM on Cloud (IBM France Lab)
This document discusses deploying IBM Operational Decision Manager (ODM) on Kubernetes. It provides a brief history of moving ODM from on-premise to Docker and Kubernetes. It discusses tips for building Docker images for ODM and using Helm charts to deploy ODM on Kubernetes. Finally, it discusses deploying ODM on IBM Cloud Private using Docker images and Helm charts to provide a production-ready deployment of ODM on Kubernetes.
AWS Summit Singapore - Protecting AWS and Hybrid Workloads with Veeam and N2WS (Amazon Web Services)
Anthony Spiteri, Global Technologist, Product Strategy, Veeam.
Alexander Thomson, Sales Director EMEA & APAC, N2WS.
Veeam has pioneered the market of Availability for the Always On Enterprise by helping enterprises meet recovery time and point objectives (RTPO) of less than 15 minutes on any cloud or hybrid platform. Veeam recently acquired N2WS, a leading provider of cloud native backup and DR solutions providing backup automation and instant recovery for AWS workloads. Come and hear how N2WS is leading the backup and recovery of EC2 instances and native AWS workloads, how Veeam VTL technology leveraging the AWS Storage Gateway offers offsite cloud repositories as well as how Veeam is offering leading availability solutions for VMware Cloud on AWS.
Anthos Security: modernize your security posture for cloud native applications (Greg Castle)
In this talk we describe a high-level workflow for securing Kubernetes clusters across GKE, Anthos on AWS, and Anthos On-Prem. There's a lot to cover: about 30 products and features across 3 platforms!
Prevention is better than cure. Learn 3 stages of AWS optimization.
1) Arrest Cloud Leakage
2) Implement Continuous Optimization
3) Explore cost-effective cloud options
ActOnMagic empowers cloud-first and cloud-only companies to utilise any cloud service efficiently and securely, without fear and without losing freedom. Visit www.actonmagic.com
Cloudureka: Cloud IaaS Discovery (CID) Platform
Essential toolkit for every cloud engineer
Search and Compare Any Cloud or Multi-Cloud to measure ROI
ActOnCloud: Intelligent Cloud Essentials (ICE) Platform
Manage, Optimize and Provision Any Cloud or Multi-Cloud
This document discusses cloud computing, including definitions, benefits, risks, and terminology. It begins by clearing up common misconceptions about cloud computing. The main benefits cited are reduced costs through no upfront investment, ability to grow and shrink resources as needed, and offloading management of data storage and sharing. Risks discussed include issues around job security, migration between providers, availability of services, data security and privacy, and integration with existing systems. The document emphasizes that cloud computing can complement existing infrastructure and is not going away as a technology.
This document discusses the benefits and considerations of using cloud computing. Some key benefits include flexibility, speed, pay-as-you-go costs, and removing the need for specialized resources. However, there are also security, regulation, and infrastructure dependency concerns. It emphasizes that a hybrid cloud model combining internal and external systems provides the best solution. Integration between different cloud resources and internal systems requires tools like a cloud broker and XML gateway. Case studies demonstrate how the cloud can enable scaling, content distribution, and cost-effective marketing campaigns.
Introduction to Cloud Computing with Amazon Web Services and Customer Case Study (Amazon Web Services)
Join this workshop to understand the core concepts of “Cloud Computing” and how businesses around the world are running the infrastructure that supports their websites to lower costs, improve time-to-market, and enable rapid scalability, matching resources to the demands of users. Whether you are an enterprise looking for IT innovation, agility and resiliency, or a small or medium business that wants to accelerate growth without a big upfront investment of cash or time in technology, the AWS Cloud provides a complete set of services at zero upfront cost, available within minutes with a few clicks.
Cloud Velocity provides hybrid cloud software which can migrate your existing applications into the public cloud with no application modification needed and with a high level of security and control in the cloud.
Presented at Interop, Mumbai -2009.
Due to its unique model, clouds are being loved by techies and businessmen alike. However, clouds also present unique challenges such as dependency on Internet connectivity, vendor lock-in, vendor infrastructure as a single point of failure as well as lack of control over the SaaS software release schedule. That said, each of these challenges can be effectively tackled. For instance, Internet connectivity challenges can be addressed by means of caching devices, while "Cloud Virtualizers" can help in addressing vendor lock-in and vendor infrastructure as a single point of failure. The goal of this session is to make the audience aware of the opportunities, while effectively tackling cloud-related challenges.
Dynamics Day '11 - The Cloud: What it means for Dynamics (Intergen)
This document discusses the cloud and its implications for Microsoft Dynamics. It begins by defining the cloud and describing the different cloud models. It then outlines the benefits of the cloud, including lower costs, reduced maintenance, and increased agility. Some challenges of moving to the cloud are also discussed, such as migration costs, security, and legal issues around data sovereignty. The document uses the example of MedRecruit, a recruitment company, to illustrate how different parts of their business have adopted cloud solutions like Office 365, Dynamics CRM Online, and hosted NAV. It addresses common concerns around security and reliability in the cloud.
AWS offers a variety of data migration services and tools to help you easily and rapidly move everything from gigabytes to petabytes of data. We can provide guidance and methodologies to help you find the right service or tool to fit your requirements, and we share examples of customers who have used these options in their cloud journey.
A perspective on cloud computing and enterprise SaaS applications (George Milliken)
A perspective on Cloud computing and SaaS for Enterprise applications by a SaaS industry veteran.
Please make sure you read the speaker's notes; there is a significant amount of content there.
#IBM Open technology platforms, pre-integrated and pre-tested systems, and optimised configurations: that's the IBM Cloud Infrastructure Alliance, especially designed to help you accelerate your journey to the Cloud. Contact me for more details. #ibmcloud
Cloud Computing Roadmap Public Vs Private Vs Hybrid And SaaS Vs PaaS Vs IaaS ... (SlideTeam)
These are the slides from the AWS Enterprise Summit held on October 29, 2014, presented by Markku Lepisto, APAC Principal Technology Evangelist at Amazon Web Services.
Session summary: Cloud computing is rapidly changing the way enterprises consume and produce IT services. Large enterprises generally recognize the value of cloud computing, but many are unsure how to evaluate and adopt it in a way that fits their business. This session examines several cloud adoption strategies and each of their stages.
This document discusses the concept of cloud computing and its implications for businesses. It begins with definitions of cloud computing and discusses various cloud service models (infrastructure as a service, platform as a service, software as a service) and deployment models (private cloud, public cloud, hybrid cloud). It then addresses how cloud computing provides opportunities for resellers to offer new services while some users still have concerns about security and reliability. The cloud market is growing rapidly but still makes up a small percentage of overall IT spending currently.
We talk about cloud adoption challenges and cloud failures. Like the AWS re:Invent event, we also talk about cost management, visibility, and governance. We pick one solution, CliQr.com, to show how to avoid obstacles and manage hybrid clouds as a company. #hybridcloudsuccessful
The document discusses the risks and rewards of big data in the cloud. It provides an overview of cloud computing categories including infrastructure as a service, platform as a service, and applications as a service. Benefits of cloud computing include flexibility, scalability, no upfront costs, and pay-per-use models. Risks include concerns over security and control of data and systems in the cloud.
Cloud computing allows applications and services to be delivered over the internet through virtualized infrastructure. It provides scalable resources, self-service access, and pay-as-you-go pricing. While cloud computing offers potential benefits, there is still confusion around its definition. Oracle's strategy is to provide enabling technologies for both private and public cloud deployments, giving customers choice while ensuring security and enterprise-grade capabilities. Architects should assess existing systems to identify good candidates for cloud and plan a gradual evolution that partitions workloads and shifts resources to shared services.
One of the most over-used terms in technology today, the “Cloud” is being used to describe pretty much any service that works over the Internet. But cloud computing has some specific advantages and some specific concerns. There are also three main areas where cloud computing is making a lot of business sense: in running business applications, in providing storage services, and in providing an alternative to in-house computer servers.
In this presentation, I will better define what the cloud is and isn’t and then explore the areas where cloud services are providing value. I also give you tips on evaluating future cloud service providers so that you can continue to understand this new computing paradigm.
The document discusses the key considerations for migrating an enterprise's systems and applications to the cloud. It outlines 6 main steps: 1) Consider the data and applications, 2) Evaluate costs, 3) Define a cloud migration strategy, 4) Choose the appropriate cloud model (public, private, hybrid), 5) Rethink governance and security strategies, and 6) Prepare for potential challenges during migration. The company Silverlining is positioned as providing consulting services to help enterprises navigate this process and realize the benefits of cloud computing.
This slide deck was presented at #DataOnCloud event New York. DataOnCloud is an invite-only event for CIOs and top IT innovators. DataOnCloud enables key decision makers to discuss about real life adoption scenarios, challenges and best practices for leveraging Big, Small and Line Of Business Data on Cloud.
Aditi Technologies, a 'cloud first' technology services company organized #DataOnCloud, an event series focused on orchestrating data on cloud and navigating the complexity around integration, security, platform selection and technology solutions.
Aditi Technologies partnered with Microsoft for this 2-hour, CXO roundtable event in global technology hubs - London, New York, Seattle and San Diego
This document discusses how cloud computing can accelerate innovation and drive new business models for enterprises. It notes that private and public cloud models can optimize capital expenditures, lower operating expenses, improve uptime and service delivery times. The transition to cloud often requires changes to roles, skills, processes and organizational structure. Key benefits of cloud include lower total cost of ownership, increased speed and agility, operational simplicity, and ability to easily scale. A hybrid cloud model combining on-premise and off-premise infrastructure can deliver benefits like cost reduction, revenue growth, strategic budget allocation, and faster provisioning times.
Similar to Newesis - Introduction to the Cloud
Introduction to Microsoft Azure Well Architected Framework in Italian - Session 6 of 6
Module 6: Performance Efficiency
Introduction to Microsoft Azure Well Architected Framework in Italian - Session 5 of 6
Module 5: Operational Excellence
Introduction to Microsoft Azure Well Architected Framework in Italian - Session 4 of 6
Module 4: Cost Optimization
Introduction to Microsoft Azure Well Architected Framework in Italian - Session 3 of 6
Module 3: Security
Introduction to Microsoft Azure Well Architected Framework in Italian - Session 2 of 6
Module 2: Reliability
Introduction to Microsoft Azure Well Architected Framework in Italian - Session 1 of 6
Module 1: Introduction, Principles, and Core Concepts
Terraform and Infrastructure as Code (IaC): an introduction to why this kind of solution was created and an explanation of the concepts and usage, with a link in the notes to a demo project available on GitHub.
Kubernetes the deltatre way: the basics - introduction to containers and orc... (Rauno De Pasquale)
The basics - Introduction to Containers and Orchestrators (May 18th, 2020)
by Rauno De Pasquale (Newesis), supported by Cristiano Degiorgis (Deltatre)
A new version of the introduction to containers and orchestrator, done for the series of events "Kubernetes - The Deltatre way".
Knowing the context and concepts behind container use is essential to be able to proceed on the path that leads to mastering Kubernetes and Cloud Native applications. This initial session is about the basic skills needed to answer questions such as: what is a container image? Why did anyone feel the need for an orchestrator? Are there any alternatives to Docker and Kubernetes? How does working with containers and Kubernetes relate to traditional virtualization? The session aims to provide the basic skills needed to guide yourself through the next sessions, where the creation and execution of applications in a Kubernetes environment will be tackled.
Recorded session: YouTube | Facebook
Repository: https://github.com/deltatrelabs/community-events-kubernetes-the-deltatre-way
DevOps Torino Meetup - DevOps Engineer, a role that does not exist but is muc... (Rauno De Pasquale)
The third meeting of the DevOps Meetup in Turin. We ran a survey to collect data and discuss the usage of the term "DevOps Engineer" to define a specific role. Is it really a role? And how does this role compare with those of SysAdmin, Cloud Engineer, SRE or Developer? Which organisational models are used for each of these roles? What are the skills and areas of competence?
Independently from the DevOps movement, but starting from the same problems, Google developed its own strategy, defining a new specific role called SRE (Site Reliability Engineer). This introduction tries to explain the history and concepts of this methodology and to compare it with the DevOps manifesto, to understand what it means to adopt DevOps, what it means to be an SRE, what the two things share, and where they diverge.
DevOps Torino Meetup Group Kickoff Meeting - Why a meetup group on DevOps, wh... (Rauno De Pasquale)
Torino DevOps Meetup Group - Culture, Processes and Tools.
There is a lot of talk about DevOps culture and practices, with different points of view and a lot of misunderstandings. This group aims to create a point of discussion to share experiences, analyses and thoughts to help each of us better understand and implement DevOps approaches in our way of working in Digital Services.
This document provides an introduction to containers and container orchestration technologies. It discusses the evolution from virtual machines to containers and the benefits of containers. It then explains why an orchestrator tool is needed to manage containers at scale. The remainder of the document defines common container and orchestration concepts, including Docker, Kubernetes objects and components, Helm for package management, and Istio for traffic management and security.
6. Cloud is not just remote hosting
Cloud is a new kind of service
With new capabilities
7. Common mistakes
Cloud false myths
Costs Evaluation
Cloud is more expensive
Cloud requires more work
Technical choices
Migrate from On Premises to Cloud vs Transform from On Premises to Cloud
Capacity and Scalability are not infinite and not fully automated
Availability
Clouds have incidents
No longer a single SLA to refer to
Data and Security Management
Data backup vs Data replication (and both need to be explicitly selected)
Control access to resources (and monitoring and alerting are not active and configured by default)
10. Common mistakes
Cloud false myths
Technical choices
Migrate from On Premises to Cloud vs Transform from On Premises to Cloud
Capacity and Scalability are not infinite and not fully automated
12. Common mistakes
Cloud false myths
Data and Security Management
Data backup vs Data replication (and both need to be explicitly selected)
Control access to resources (and monitoring and alerting are not active and configured by default)
14. Newesis Team and Cloud hosting
A long journey
2009 Build of private hybrid cloud (Rackspace)
2010 Data storage and API (AWS)
2011 Virtual Machines provisioning, DNS Service (AWS)
2012 Video processing and delivery (Azure)
2014 Video processing and delivery (AWS)
2016 Virtual Machines provisioning, Network Peering, Web Application, data storage, API (Azure)
2017 Virtual Machines provisioning, Private Networks, Web Application and Containers (Alibaba)
2018 Virtual Machines provisioning, Private Networks, Web Application and Kubernetes (Google Cloud Platform)
2018 Data Lake and Data Analytics (AWS)
38. Our approach at Newesis
Business driven technology choices by small Independent Multidisciplinary Teams
Before creating, check if you can just use, but do not assume it is complete out of the box
Think Reliability and Security
Constantly Measure, Observe and Adapt
We have all been using cloud services for years now, so this session does not pretend to teach anything technical. There are many cloud services, and even assuming I had the skills to teach you what each of them is and how to operate it (and that assumption would be wrong), we would need to stay in this room for months to complete such an exercise, and once completed we would discover that what we had learned was already old and new services had been created in the meantime.
Today what we want to achieve is a common understanding of what it means to work with the Cloud, how this changes the paradigms we were following before, and how this changes the definition of our roles and interactions.
This session aims to give a common context and approach when it comes to considering Cloud services.
We used to have candles to provide light in our houses; this has been replaced by electricity powering light bulbs. Moving from candles to light bulbs is not cost effective: candles are really inexpensive, and the only dependency to use them is an initial flame, while light bulbs are more expensive than candles and require electrical plants and a subscription to a provider bringing electricity up to your house. But with light bulbs the operation of having light is much easier, it is much easier to vary how much light you want in one room compared to another, you can easily have lights in multiple areas of the house come on together, and the electrical plant can be used for many other things than just providing light.
When we all moved from legacy mobile phones to smartphones we got a product that was considerably more expensive, that was bigger and harder to fit into a pocket, often with worse antennas than the phones we were using before, and with batteries that lasted far less (we used not to charge the legacy phones for days; with smartphones this became hours). But we all moved from legacy phones to smartphones because they enabled us to do much more and to become more efficient. We used to travel with a phone, an agenda, a computer, a camera, a navigation system, and a music player. All these things merged into a single product capable of doing everything they were doing and something more. But it means we had to start using the smartphone in a way that is quite different from the way we were using the legacy phones.
The Cloud providers have very good marketing people, and marketing has been very good at creating attention for the Cloud even when the concept was very far from what people were used to. Marketing communicates through simple messages; people tend to absorb them quickly, and this caused the creation of a series of false myths that it is important to demystify.
The most common mistake is to believe the Cloud will cost less: "I can use it without any problem because it is very inexpensive." This is not true. If you take the same computational power in an On Premises setup and in the Cloud (any of the major Cloud providers) and you compare costs, On Premises will always cost less. This is because you are comparing two different things, and it is easy to understand with an example. If you make your own pizza, you buy some flour, yeast, salt, oil, and tomato sauce, and you do the job using your own oven with your own electricity. This will cost you around 2 euro per pizza. If you go and order a pizza in one of the best pizzerias in town, the same pizza will cost you 5 euro.
But if you need to make 100 pizzas at home, this would be incredibly slow and difficult, or very expensive because you would have to buy additional ovens and tables and space, while in the pizzeria each pizza will always cost 5 euro, or probably less because they will give you a volume discount. You could decide to buy or build your own big Pizza Machine to be able to prepare 100 pizzas very quickly, and in the long term this will cost you less than keeping ordering from the pizzeria. But what if you discover that you need to make only 50 pizzas but add 50 lasagne? Your investment in the big Pizza Machine will sit partly unused and you will need a big Lasagna Machine too. In the pizzeria any pizza will keep costing you 5 euro, and you will be able to go and ask for lasagne or spaghetti as well, changing your mind at any moment without the need to plan in advance.
Coming back to the list:
Cloud is not less expensive; it just has a different usage model. I am insisting a lot on this aspect because our experience so far has been an experience of wasting money, and it is not only us: the most common result for companies migrating from traditional solutions to the Cloud is that they end up spending more money than in the past. It is dramatically important for everyone to remember that the Cloud has a "per usage" model and also that the Cloud has a very detailed list of explicit costs. When you create a Virtual Machine, the simplest example, you will pay a cost per hour based on the CPU size and memory size of the VM; in addition, you will pay a cost on top for the disk storage used by the VM, based on the size of the space allocated but also on the kind of disk (different models having different scalability, security, and performance characteristics); you will also pay a cost on top for the number of operations you make on the disk (not included in the pure cost of the disk space); you pay a cost on top for the network traffic generated by the VM; you pay a cost on top for any snapshot you take of the VM; and if you enable some special agent or add-on, it will also have a separate cost line. And, this is the most important thing, you pay as long as what you created exists: it does not matter whether you are using it or not, you pay per minute of existence. So it makes a very big difference whether you shut a service down now or in two hours, and whether you sized it appropriately for what you need or oversized it just to be sure and have contingency. The business model of the Cloud is strictly based on the concept that many people will over provision and will forget to turn off services when not in use.
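To make this itemized, per-usage billing concrete, here is a minimal sketch in Python of how the separate cost lines of a single VM accumulate for every hour it exists. All rates below are invented placeholders for illustration, not any provider's real prices:

# Hypothetical per-usage price list (illustrative placeholders, not real provider rates).
HOURLY = {
    "compute": 0.20,            # per hour, driven by CPU and memory size
    "disk_gb": 0.0002,          # per allocated GB-hour, used or not
}
PER_UNIT = {
    "disk_op": 0.0000005,       # per disk I/O operation, on top of the space cost
    "egress_gb": 0.09,          # per GB of outbound network traffic
    "snapshot_gb_month": 0.05,  # per GB-month of snapshots, prorated hourly
}

def vm_cost(hours, disk_gb, disk_ops, egress_gb, snapshot_gb):
    """The bill accrues for every hour the VM exists, whether it is used or not."""
    compute = hours * HOURLY["compute"]
    disk = hours * disk_gb * HOURLY["disk_gb"]
    ops = disk_ops * PER_UNIT["disk_op"]
    egress = egress_gb * PER_UNIT["egress_gb"]
    snapshots = snapshot_gb * PER_UNIT["snapshot_gb_month"] * (hours / 730)
    return compute + disk + ops + egress + snapshots

# An oversized VM left running idle for a month still pays compute and disk:
print(f"idle month: {vm_cost(730, 512, 0, 0, 0):.2f} EUR")   # ~220 EUR
print(f"two hours:  {vm_cost(2, 512, 0, 0, 0):.2f} EUR")     # ~0.60 EUR

The point of the sketch is the shape of the bill, not the numbers: every resource that exists is a separate meter that keeps running until you explicitly turn it off.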
The Cloud providers are constantly creating new services that can be more convenient for running the business cases of your projects than the ones you were using before. They are also constantly updating their APIs and UIs, and they change the prices of existing services based on how the market moves and how capacity and capabilities are used in the datacentre.
The value of the Cloud is also in the ability to create multiple systems, isolated from each other in terms of deployment, so if you were used to having a cluster of 9 database servers to run all the business, when moving to the Cloud you will find yourself having hundreds of them.
For all these reasons together, it is clear that the work required to manage and operate the Cloud has increased compared with the work required to manage a traditional On Prem solution, which is why the market average went from 1 FTE of a System Engineer for every 15 FTE of Development to the current 1 FTE of a Cloud Engineer (the evolution of the System Engineer) for every 5 FTE of Development.
Technical choices: as said before, if you take what you have On Premises and move it to the Cloud as is, preserving the same operational model, just don't do it: it will simply cost you much more. You have to operate things in the Cloud the way the Cloud requires, with continuous resizing, deletion and creation based on actual needs.
One other common mistake is to believe that capacity and scalability, like backup, resiliency or security, are available by default. Clouds are composed of hardware physically installed in physical datacentres, which means that each service in each region has its own fairly finite capacity. For most services you have to configure capacity and scalability explicitly, you have to define and activate the scaling rules yourself, and you will hit limits, so you have to design your solution to be able to run from multiple regions.
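As one example of making a scaling rule explicit, here is a minimal sketch that attaches a target-tracking scaling policy to an assumed AWS Auto Scaling group (the group name "web-asg" is hypothetical) using the boto3 library; equivalent explicit configuration exists on every provider.

    import boto3

    # Nothing scales until a rule like this is defined and activated.
    autoscaling = boto3.client("autoscaling")
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",          # hypothetical group name
        PolicyName="keep-cpu-around-50-percent",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 50.0,                 # add/remove VMs around 50% CPU
        },
    )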
The Clouds are very complex distributed systems, operated by humans and running on physical hardware. For this reason Clouds have incidents and failures, which are sometimes able to make an entire region unavailable for long hours.
This is common to each and every Cloud provider and can easily be verified by looking at the incident reports of each of the main vendors. Azure, for example, had incidents on the following days: 8 November, 2 November, 27 October, 24 October, 17 October, 16 October, 13 October, 11 October, 8 October, 4 October, 3 October and 2 October. AWS had 40 incidents in the last 30 days.
When it comes to SLAs, pay attention to the fact that each single atomic service has its own independent SLA. This means that if your service depends on the availability of the DNS, a set of App Services, the network, a couple of Virtual Machines and a PaaS database, and the DNS goes down so that your service is completely unavailable for your end users, the Cloud provider will only pay you the credits related to the difference between the actual uptime of the DNS service and the SLA on that service. There is no penalty on the other components, which you will keep paying in full, without any reimbursement.
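There is a second consequence worth making explicit: when every component must be up for your service to work, the composite availability is the product of the individual SLAs, so it is always worse than the weakest component. A quick sketch, with made-up SLA values:

    # Hypothetical per-service SLAs for a solution that needs all of them.
    slas = {
        "dns": 0.9999,
        "app_service": 0.9995,
        "network": 0.999,
        "virtual_machines": 0.9995,
        "paas_database": 0.9999,
    }

    composite = 1.0
    for availability in slas.values():
        composite *= availability   # all components must be up at the same time

    print(f"composite SLA: {composite:.4%}")                       # ~99.78%
    print(f"allowed downtime: {(1 - composite) * 8760:.1f} h/year")  # ~19 hours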
This, like the capacity and scalability limitations, means that you have to design your solution to be capable of running in multiple regions, and to design it to cope with multiple kinds of failure. Ideally, and in the extreme, you should also design your solution to run using services from multiple different cloud providers.
Please also always remember that the Cloud business model is based on overbooking and abstraction: when the Cloud refers to 1 CPU, this is not equal to the capacity you get from 1 CPU in your On Premises setup, and the same goes for network throughput. It is also always possible that activities executed by other customers of the same Cloud provider and region will impact the capacity available for your deployment.
Cloud platforms have everything needed to manage data replication, data backup, security and monitoring. All these features are available, but none of them is activated by default. The deployment has to be designed to implement the required configuration for each of these areas.
It is also important to make a clear distinction between data backup and data replication. People often confuse the two and believe that geographically redundant storage is all they need to keep their data safe. Data replication means there are multiple distributed copies of the data; this is necessary to assure data availability, but to assure data reliability you also need a backup policy. When you replicate data you replicate every action performed on it, including data corruption caused by bad edits or deletions.
Different mechanisms and levels of data replication and backup policy are available in each Cloud service; it is important to define the business requirements for the specific solution and explicitly implement what is best for the case, taking the related costs into consideration.
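As a minimal sketch of the "explicit backup" side, assuming an AWS EBS volume (the volume id below is hypothetical): a point-in-time snapshot has to be requested, scheduled and paid for on purpose; replication alone would never have given you this restore point.

    import boto3

    ec2 = boto3.client("ec2")

    # A snapshot is an explicit, point-in-time restore point: unlike
    # replication, a later corruption or deletion cannot propagate into it.
    ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",   # hypothetical volume id
        Description="nightly backup before batch import",
    )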
One of the most common mistakes with Cloud provisioning is to forget about security. Thousands of Kubernetes clusters, Redis clusters, MongoDB replica sets and other services are compromised every year because they were deployed in the Cloud using public templates, without taking the time to restrict network access or to change default usernames and passwords. As with the other elements we just discussed, services are available in the Cloud to correctly manage every security concern, but they are rarely activated or configured by default. While On Premises resources are by default reachable only from private office networks, services in the public Cloud are reached via the public internet, which means that when you open access you are potentially opening it to everyone. Almost every company has already experienced multiple services and Virtual Machines configured as completely open to the Internet, and in more than a couple of cases, across multiple cloud providers, we have seen virtual machines completely hacked. One thing is certain, and you can easily check it by looking at the logs of the ADSL router you have at home: any public IP is constantly scanned and attacked. Do not think "it has a public IP but nobody knows about it, so why should someone attack it"; the simple fact of having a public IP means that the service or server is under attack and will constantly be under attack.
Encrypt traffic and apply as much network access filtering as possible on any system or service you deploy in the Cloud; change every default username and password; always make a personal copy of any template you want to use and apply your own changes, avoiding the direct usage of public images and templates; and select only thoroughly trustworthy repositories.
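For instance, here is a minimal sketch of explicit network filtering on AWS with boto3, assuming a hypothetical security group id and a hypothetical office network range of 203.0.113.0/24: administrative access is opened only for that range, instead of the catastrophic default of everyone.

    import boto3

    ec2 = boto3.client("ec2")

    # Allow SSH only from the (hypothetical) office network,
    # never from 0.0.0.0/0.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0abc1234def567890",     # hypothetical security group id
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{
                "CidrIp": "203.0.113.0/24",
                "Description": "office VPN only",
            }],
        }],
    )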
Logging and monitoring tools are also available from every Cloud vendor and, again, they are not configured by default. Take your time to analyse the solution you have to deploy in the Cloud and properly configure log collection, log analysis and monitoring, for both security and availability. This is a continuous activity: monitoring has to be tuned constantly.
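A sketch of one such explicit step, assuming AWS CloudWatch and hypothetical instance id and notification topic values: an alert like this fires only if someone creates it.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # No alert will ever fire unless an alarm like this is created explicitly.
    cloudwatch.put_metric_alarm(
        AlarmName="web-vm-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId",
                     "Value": "i-0123456789abcdef0"}],   # hypothetical instance
        Statistic="Average",
        Period=300,               # 5-minute samples
        EvaluationPeriods=2,      # sustained for 10 minutes
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],  # hypothetical topic
    )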
There has been a gradual approach, often tactical and driven by project-specific needs.
The usage of Cloud services requires a new paradigm; it cannot be approached with the same toolset and mindset we had while working On Premises or with traditional infrastructure.
Cloud adoption history is full of cases of big success, but it is also full of incredible, and very expensive, failures.
As with everything in technology (and in organisation or methodology), there is no absolute "right thing to do" versus "wrong thing to do"; Cloud is not "good" and traditional hosting "bad". Which is the best solution depends on the specific context, the context being the architecture, processes and tools, project methodology, team organisation, available skills, and business requirements and objectives.
It is possible to use Cloud services with traditional business processes and solutions too, but the best results are visible when a DevOps-organised team, using Agile methodologies, uses automation tools to deploy a microservices (or at least modular) architecture.
If you have a big monolithic application that requires a fixed, immutable capacity and is necessarily based on an IaaS (virtual machines) model, please do yourself a favour and keep it in traditional hosting; with Cloud you will just get higher costs and probably lower performance too.
The benefits of the Cloud are the ability to deploy continuously, to use dynamically allocated resources and to change deployment topologies across services and geographies; this can be achieved with modular solutions, where you manage a fully distributed design composed of a series of small independent deployments.
With the Cloud you will find yourself creating multiple independent deployments, and the number of services and instances will constantly grow.
The interaction between services will also gradually become more complex.
It is not possible to manage this complexity using documents (such as detailed configuration, network or deployment schemas) or to operate configurations and updates manually.
Automation tools, orchestrators and service mesh solutions (such as Istio) must be used to keep control of the deployments.
Only by operating exclusively via these tools is it possible to guarantee that the status of each deployment is as it was designed to be (with the documentation living inside the tools, as part of the pipelines, scripts and variables actually used to build the deployment), and only by knowing the starting status is it possible to apply changes safely.
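As a minimal sketch of "documentation inside the tools", here is an infrastructure-as-code fragment using Pulumi's Python SDK, one possible tool among many; the AMI id and resource names are hypothetical. The program itself is the authoritative record of what the deployment looks like, and re-running it reconciles reality with that record.

    import pulumi
    import pulumi_aws as aws

    # The code *is* the deployment documentation: re-running the program
    # reconciles the real infrastructure with this declared state.
    web = aws.ec2.Instance(
        "web",
        ami="ami-0abcdef1234567890",   # hypothetical image id
        instance_type="t3.micro",
        tags={"project": "demo", "managed-by": "pulumi"},
    )

    pulumi.export("public_ip", web.public_ip)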
Different skills (and therefore different people) from the multidisciplinary team taking care of a project will have to cooperate to define each step of the automation and orchestration pipelines, and almost every member of the team will be able to operate them.
Cloud means elasticity. It is easy and quick to activate a new service or resource, or to change an existing one. In the Cloud you pay for what you have created, so it is important to be effective: in quick cycles, create, test and destroy until you find the right setup, without leaving unused and unneeded resources active, knowing you will always be able to recreate them when needed.
DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. DevOps is also characterized by operations staff making use of many of the same techniques as developers for their systems work. “DevOps is the application of Agile Methodology to System Administration” (Tom Limoncelli, “The Practice of Cloud System Administration”).
Core values (CAMS): Culture (people, process, tools); Automation (infrastructure as code); Measurement (measure everything); Sharing (collaboration and feedback).
DevOps is mostly about breaking down barriers between teams. An enormous amount of time is wasted with tickets sitting in queues, or individuals writing handoff documentation for the person sitting right next to them.
Amazon AWS, Microsoft Azure and Google GCP are clearly recognised globally as the dominant providers. Alibaba, Oracle and IBM (now including Red Hat) are the other competitors, but the distance from the first three is very large. Many other companies tried to enter the market (such as the Dell/VMware family) but decided to exit because there was no space (and VMware signed agreements with the main cloud providers for hybrid solution services). Others, such as Rackspace, decided to merely maintain their own solution and start providing professional services on top of the big three in order to survive.
Cloud is here to stay; adoption of Cloud services keeps growing at a rate of more than 50% each quarter.
Cloud services are constantly evolving, and new services are made available every month.
It is very important to spend effort (and time) analysing, for new but also for existing projects, what has changed in the services we were using (new capabilities or a new cost model) and which other new services could be a better fit for the project's needs.
Each service exists in multiple flavours. For example, something as basic as disk space can be provided with very different capabilities (local or geographical redundancy, snapshotting, online resizing, native HTTPS, SFTP or Rsync access, etc.), with very different attributes (number of operations supported, capacity and available space, etc.) and with different SLAs (for example, one disk could support up to 500 IOPS on a best-effort basis, while another could support the same 500 IOPS but guaranteed).
You need to spend time on the right design of the deployment in order to match the business requirements.
The following are the assumptions you should always keep in mind in order to design a resilient solution:
Infrastructure will fail: this is a given. No matter whether it is On Premises, physical or virtual, or in the Cloud, and no matter the vendor or provider you are using, you will always experience infrastructure failures. For this reason you have to design your deployment to cope with failures (multi-region, multi-cloud, graceful degradation; a minimal failure-handling sketch follows this list of assumptions).
Bugs will happen: you must have methods and tools to reduce as much as possible the probability of a bug reaching the production environment, but bugs will always exist and they will always find their way to the end user. For this reason you need tools and procedures in place to react and correct quickly. Do not think "zero bugs"; think "fast recovery".
People will make mistakes: you need to design your processes and tools to minimise the possibility and the impact of human mistakes, but this will never bring you to a "zero mistakes" situation. Processes and tools need to be in place to intercept the mistake and apply corrections.
Usage will differ from the design: you will design your solution to be used in a certain way, but one day someone will use it differently. This happens in frontend applications and backoffice applications, but also in the tools and pipelines used for development and deployment. Be ready to recognise a different usage pattern and to support, or even embrace, it.
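As promised in the first assumption above, here is a minimal failure-handling sketch in plain Python, built around a hypothetical dependency call fetch_recommendations: transient failures are retried with exponential backoff, and if the dependency stays down the caller degrades gracefully instead of failing completely.

    import random
    import time

    def fetch_recommendations():
        """Hypothetical call to a dependency that fails now and then."""
        if random.random() < 0.3:
            raise ConnectionError("upstream unavailable")
        return ["item-1", "item-2", "item-3"]

    def with_retry(fn, attempts=3, base_delay=0.2):
        """Retry transient failures with exponential backoff."""
        for attempt in range(attempts):
            try:
                return fn()
            except ConnectionError:
                if attempt == attempts - 1:
                    raise
                time.sleep(base_delay * 2 ** attempt)

    def recommendations_or_fallback():
        """Degrade gracefully: the page renders even if the dependency is down."""
        try:
            return with_retry(fetch_recommendations)
        except ConnectionError:
            return []   # empty but valid answer instead of a full outage

    print(recommendations_or_fallback())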
Always remember that a technology is never "good" or "bad" in essence; the starting point for evaluating a technical choice is the business requirements and priorities. No IT project can succeed without a business need behind it, and a pure technological refactoring should be approached only if it is directly linked to business value.
The current business context demands speed and a reduced time to market; the current technology context is constantly changing and increasingly complex, removing any clear distinction between software and infrastructure. The only way to respond to such demands from business and technology is to run small teams that include all the required skills (and it is more about skills than roles now): teams that can design, execute and operate without depending on external teams.
The giants in the market, such as Amazon, Microsoft and Google, are constantly making new services available, and other companies are building tools and components on top of them. If you have a business need, first check whether something already exists in the market, but always remember that you will have to analyse it in depth and integrate it, building something around it or learning how best to use it (please remember the previous examples about missing backup or monitoring).
Reliability and security are key in Cloud environments because of the public nature of the deployments (even more so than On Premises); it is essential to keep these two elements in mind from the design phase through the operation phase, and to constantly evolve the deployment as things change.
The Cloud is an open space where it is very quick, easy and inexpensive to experiment; don't miss this opportunity.