Jonathan Donaldson, VP & GM, Cloud and Infrastructure Technologies, Intel Corporation, talks about Intel's work in the community to help make Kubernetes ready for the enterprise.
12/12/16
DigitalOcean moved from a patchwork of inconsistent deployment tools to Kubernetes for container orchestration, cutting the time to deploy new services from hours to minutes. They tailored their use of Kubernetes by focusing on stateless services, declarative deployments, and abstracting operational concerns, and built an internal tool, "docc", that lets teams describe applications and infrastructure through simple manifests. Docc helped deploy 50 applications in 6 months and powered an internal hackathon. Key lessons included keeping up with Kubernetes' rapid pace of change and automating cluster management; future investments will target service meshes, network policies, and secure secret storage.
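To make the "describe applications through manifests" idea concrete, here is a minimal sketch of how a docc-style app description might expand into a Kubernetes Deployment object. The manifest schema, field names, and image here are invented for illustration; DigitalOcean's actual docc format is not public in this summary.

```python
# Hypothetical sketch: expanding a docc-style app manifest into a
# Kubernetes Deployment dict (the real docc tool may differ substantially).

def expand_manifest(app):
    """Turn a minimal app description into a Deployment object."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app["name"]},
        "spec": {
            "replicas": app.get("replicas", 1),
            "selector": {"matchLabels": {"app": app["name"]}},
            "template": {
                "metadata": {"labels": {"app": app["name"]}},
                "spec": {
                    "containers": [{
                        "name": app["name"],
                        "image": app["image"],
                        "ports": [{"containerPort": p}
                                  for p in app.get("ports", [])],
                    }]
                },
            },
        },
    }

# Illustrative manifest a service team might write:
manifest = {"name": "billing-api", "image": "registry.internal/billing:1.4.2",
            "replicas": 3, "ports": [8080]}
deployment = expand_manifest(manifest)
```

The value of the abstraction is that teams declare only what they care about (name, image, replicas) while the tool fills in the operational boilerplate.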
Tectonic Summit 2016: Multitenant Data Architectures with Kubernetes (CoreOS)
This document discusses using Kubernetes to build multitenant data architectures. It notes that software development and data science have distinct lifecycles that are well-served by repeatability. Kubernetes allows packaging of different workloads in containers and scheduling them across clusters, bridging data science and development. Some challenges include ensuring collaboration between container types and adapting workloads not designed for Kubernetes. Overall, Kubernetes provides benefits like resource sharing and self-healing that can form the basis of a multitenant data platform.
Continuous Everything in a Multi-cloud and Multi-platform Environment (VMware Tanzu)
This document discusses continuous delivery strategies using Pivotal technologies like Pivotal Build Service, Pivotal Container Service, and Spinnaker. Pivotal Build Service allows building Docker images without Dockerfiles using buildpacks. Spinnaker is an open source multi-cloud delivery platform that provides deployment strategies and rollback capabilities. The document demonstrates continuous deployment of a Spring Boot app to PKS using Concourse CI and Spinnaker for deployment automation and monitoring.
The document presents the Cloud Development Kit (CDK) as the next big thing for infrastructure as code (IaC). It provides an overview of IaC and some of its challenges around misconfiguration and security. CDK aims to address these challenges by letting infrastructure be expressed in a general-purpose programming language, inheriting strengths like object-oriented design and better testing capabilities. Examples are shown for CDK on AWS, Terraform, and Kubernetes to demonstrate how infrastructure can be defined and provisioned as code. The document concludes with a proposed practice of using CDK to define the cloud infrastructure for a microservices system alongside its business applications.
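The core CDK idea, a tree of constructs that is "synthesized" into a plain configuration document, can be sketched in a few lines of plain Python. This is a conceptual illustration only, not the real AWS CDK, CDKTF, or cdk8s API; the class and resource names are invented.

```python
# Conceptual sketch of the CDK pattern in plain Python (no real CDK library):
# infrastructure is declared as objects in a tree, then synthesized into a
# plain document (real CDKs emit CloudFormation, Terraform JSON, or K8s YAML).

class Construct:
    def __init__(self, scope, name):
        self.name, self.children = name, []
        if scope is not None:
            scope.children.append(self)    # register under the parent scope

class Bucket(Construct):
    def __init__(self, scope, name, versioned=False):
        super().__init__(scope, name)
        self.versioned = versioned

    def synth(self):
        return {"type": "bucket", "name": self.name,
                "versioned": self.versioned}

class Stack(Construct):
    def __init__(self, name):
        super().__init__(None, name)       # a stack is the root of the tree

    def synth(self):
        return {"stack": self.name,
                "resources": [c.synth() for c in self.children]}

stack = Stack("media")
Bucket(stack, "uploads", versioned=True)
template = stack.synth()
```

Because constructs are ordinary objects, they can be unit-tested before anything is provisioned (e.g. assert every bucket is versioned), which is exactly the testing strength the document attributes to CDK.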
Lo Scenario Cloud-Native [The Cloud-Native Scenario] (Pivotal Cloud-Native Workshop: Milan) (VMware Tanzu)
This document discusses cloud-native application development. It describes how DevOps practices like continuous delivery and microservices allow for faster, higher quality software development. It introduces a cloud native maturity model and discusses how a platform with the right abstractions can help organizations adopt cloud native patterns. The document outlines Pivotal's platform capabilities and services and how they can help organizations transform applications to be cloud native and achieve outcomes like speed, stability, scalability and security. Real-world examples of organizations adopting cloud native practices are also provided.
Migrating from Self-Managed Kubernetes on EC2 to a GitOps-Enabled EKS (Weaveworks)
Did your company start down the path of building a cloud native platform on Kubernetes, aiming to let developers innovate faster and be more productive, only to run into challenges keeping it running optimally?
In this session, Weaveworks will discuss how to migrate from self-managed Kubernetes on EC2 to a GitOps-managed Shared Services Platform (SSP) on EKS. An SSP built on EKS and managed with Weave GitOps gives developers and operators common workflows for updating both applications and infrastructure. With every change in version control, full audit trails are available and security is enforced, while rollbacks become easier and mean-time-to-recovery (MTTR) shrinks. In short, a Weave GitOps-managed SSP increases developer velocity while boosting stability. You will learn:
How to operate a hybrid Kubernetes architecture, using managed EKS in the AWS Cloud and EKS-Distro on premises.
How to structure your infrastructure repository to efficiently manage multiple teams.
How to use Kubernetes RBAC to provide secure cluster multi-tenancy.
How to use GitOps to promote releases across a hybrid set of independent clusters.
How to accomplish data and operational sovereignty.
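One common pattern behind the RBAC-based multi-tenancy point above is generating a namespaced Role and RoleBinding per tenant team, so each team can only touch its own namespace. The sketch below builds those objects as Python dicts; the team name, namespace, verbs, and resource list are illustrative assumptions, not Weaveworks' actual configuration.

```python
# Sketch: per-tenant RBAC objects for namespace-scoped multi-tenancy.
# Team names, resources, and verbs here are illustrative only.

def tenant_rbac(team, namespace):
    """Build a Role and RoleBinding granting a team developer access
    to its own namespace and nothing else."""
    role = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": f"{team}-dev", "namespace": namespace},
        "rules": [{
            "apiGroups": ["", "apps"],
            "resources": ["pods", "services", "deployments"],
            "verbs": ["get", "list", "watch", "create", "update", "patch"],
        }],
    }
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": f"{team}-dev", "namespace": namespace},
        "subjects": [{"kind": "Group", "name": team,
                      "apiGroup": "rbac.authorization.k8s.io"}],
        "roleRef": {"kind": "Role", "name": f"{team}-dev",
                    "apiGroup": "rbac.authorization.k8s.io"},
    }
    return role, binding

role, binding = tenant_rbac("payments", "payments-prod")
```

In a GitOps workflow, the rendered objects would be committed to the infrastructure repository rather than applied by hand, so the audit trail covers access grants too.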
Argo Workflows 3.0, a detailed look at what’s new from the Argo Team (Libby Schulze)
Argo is a set of Kubernetes-native tools for running and managing jobs and applications on Kubernetes, including Argo Workflows, Argo Events, Argo CD, and Argo Rollouts. It started as an open source project incubated at Applatix in 2017 and was accepted as an incubating project at the Cloud Native Computing Foundation in 2020. The Argo community has grown significantly, with over 15,000 stars on GitHub and over 900 code contributors. The roadmap for Argo Workflows 3.x includes additional workflow authoring capabilities, support for multi-cluster/multi-namespace workflows, and improvements to the developer experience.
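For readers who have not seen Argo Workflows, the shape of a minimal Workflow resource looks like the following. It is built here as a Python dict purely for illustration; in practice it is written as YAML and submitted to the cluster (for example with the `argo` CLI). The image and names are placeholders.

```python
# A minimal Argo Workflow spec, expressed as a Python dict for illustration.
# One entrypoint template runs a single container that prints a message.

workflow = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Workflow",
    "metadata": {"generateName": "hello-"},   # server appends a random suffix
    "spec": {
        "entrypoint": "say-hello",
        "templates": [{
            "name": "say-hello",
            "container": {
                "image": "alpine:3.18",
                "command": ["echo"],
                "args": ["hello from Argo"],
            },
        }],
    },
}
```

Real workflows compose many such templates into DAGs or step sequences, which is where the 3.x authoring improvements mentioned above apply.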
DevOps Spain 2019: David Cañadillas, CloudBees (atSistemas)
This document discusses using Jenkins X to automate CI/CD pipelines on Kubernetes. It begins by introducing Jenkins X and its capabilities for CI/CD automation on Kubernetes using custom resource definitions. It then discusses how Jenkins X embraces a GitOps model using Git as the source of truth for promoting applications through environments. Finally, it invites the reader to a CloudBees event to learn more about building a continuous software delivery system with Jenkins X.
The document discusses migrating to cloud native solutions. It defines cloud native as an approach that exploits the advantages of cloud computing using containers, microservices, and other modern technologies. This allows applications to be scalable, resilient, and manageable. The document outlines the benefits of cloud native and provides a "trail map" to transitioning applications. It also discusses common challenges like technical debt and failing to meet CI/CD expectations, and provides recommendations to address them such as automating processes and simplifying architectures.
Kubernetes 1.21 shipped 51 enhancements: 13 features graduating to stable and 15 to beta. Major themes included CronJobs graduating to stable, immutable Secrets and ConfigMaps, dual-stack IPv4/IPv6 support, graceful node shutdown, and the persistent volume health monitor. The 1.22 release timeline was also outlined: enhancements freeze on May 13th, code freeze on July 8th, and a target release date of August 4th. SIG updates covered enhancements for API machinery, apps, auth, CLI, cloud providers, instrumentation, network, node, scheduling, and storage.
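The immutable ConfigMaps feature mentioned above is a single field on the object: once `immutable: true` is set, the API server rejects later changes to the data, and kubelets can stop watching the object. The sketch below mimics that rejection locally to show the contract; the guard function is illustrative, not part of any Kubernetes client library.

```python
# Sketch of an immutable ConfigMap and the update rejection it implies.
# `update_data` imitates the API server's behavior for illustration only.

configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "app-settings"},
    "data": {"LOG_LEVEL": "info"},
    "immutable": True,    # stable in 1.21: data can no longer be changed
}

def update_data(cm, key, value):
    """Apply a data change, refusing if the object is marked immutable."""
    if cm.get("immutable"):
        raise ValueError("ConfigMap is immutable; create a new one instead")
    cm["data"][key] = value
```

The practical pattern is to roll config changes by creating a new, differently named ConfigMap and re-pointing workloads at it, which also gives a clean rollback path.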
The document discusses continuous integration, continuous deployment, and infrastructure as code for modern applications. It describes how AWS services like CodePipeline, CodeBuild, CodeDeploy, and CloudFormation can be used to automate the build, test, and deployment of serverless and containerized applications. Continuous integration ensures code changes are built and tested regularly. Continuous deployment enables automated deployments to staging and production. Modeling infrastructure as code allows infrastructure changes to be released predictably using the same tools as code changes.
This document discusses why cloud native computing matters and presents three case studies. It begins by explaining how infrastructure changed with the rise of containerization in the 2010s, then argues that people adopt cloud native technologies because they work well and have a great community behind them. The three case studies show companies moving workloads to cloud native solutions on Kubernetes to increase agility, reduce costs, and improve developer productivity. The document concludes by noting that while technology challenges can be solved, changing organizational culture is often the hardest part.
This document provides an overview of CI/CD on Google Cloud Platform. It discusses key DevOps principles like treating infrastructure as code and automating processes. It then describes how GCP services like Cloud Build, Container Registry, Source Repositories, and Stackdriver can help achieve CI/CD. Spinnaker is mentioned as an open-source continuous delivery platform that integrates well with GCP. Overall the document outlines the benefits of CI/CD and how GCP makes CI/CD implementation easy and scalable.
CWIN17 London: Becoming Cloud Native, Part 2 - Guy Martin, Docker (Capgemini)
This document discusses how organizations can become cloud native by embracing the full opportunity from cloud. It identifies six key steps: 1) delivering business-visible and impactful benefits, 2) technical solutions that deliver the business case, 3) empowering a dedicated cloud services team, 4) creating a cloud service vending machine, 5) establishing a blueprint for integrating cloud into existing IT, and 6) implementing automated application and infrastructure pipelines. It then discusses how Docker can help organizations modernize traditional applications and build a secure software supply chain through containerization.
High-Precision GPS Positioning for Spring Developers (VMware Tanzu)
This document discusses high-precision GPS positioning for Spring developers. It covers GPS fundamentals and hardware, processing positioning data, and visualization. It describes using dual frequency GPS, consuming correction data via NTRIP, and processing NMEA data with libraries like GPSD. The document demonstrates receiving GPS data from an external receiver into a Spring Boot app using Spring Integration, exporting metrics to Prometheus and Grafana, and using QGIS and mobile apps for field data collection and visualization.
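The NMEA processing step the talk covers boils down to a simple wire format: each sentence starts with `$`, ends with `*` plus a two-digit hex checksum, and the checksum is the XOR of every character between them. A minimal sketch, independent of any GPS library mentioned in the talk (the sample coordinates below are illustrative):

```python
# Sketch of basic NMEA 0183 handling: verify the sentence checksum
# (XOR of the characters between '$' and '*') and split a GGA fix
# sentence into its comma-separated fields.

def nmea_checksum(body):
    """XOR of all characters in the sentence body."""
    c = 0
    for ch in body:
        c ^= ord(ch)
    return c

def parse_sentence(sentence):
    """Validate checksum and return the list of fields."""
    if not sentence.startswith("$") or "*" not in sentence:
        raise ValueError("not an NMEA sentence")
    body, given = sentence[1:].rsplit("*", 1)
    if nmea_checksum(body) != int(given, 16):
        raise ValueError("checksum mismatch")
    return body.split(",")

# Build a sample GGA sentence (illustrative position data):
body = "GPGGA,092750.000,5321.6802,N,00630.3372,W,1,8,1.03,61.7,M,55.2,M,,"
sentence = f"${body}*{nmea_checksum(body):02X}"
fields = parse_sentence(sentence)
```

In the Spring Boot setup described above, the equivalent parsing would happen in an inbound Spring Integration channel adapter before the fix data is turned into metrics.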
Enabling application portability with the greatest of ease! (Ken Owens)
This document discusses enabling application portability with microservices using Project Shipped. It notes the challenges of developing applications in the digital disruption era across multiple languages, data sources, and clouds. Project Shipped enhances the software development lifecycle to provide continuous integration and deployment of microservices across internal and external clouds. It demonstrates using Mantl and Consul for microservice discovery, load balancing and deployment to multiple environments. The presentation concludes by discussing a proof of concept using Project Shipped and Cisco's CMX API to build and deploy a microservice to different environments.
This document discusses how Citrix Application Delivery Management (ADM) can be used to manage Citrix ADC instances at scale in cloud-native environments. Key points include:
- Citrix ADM allows controlling and gaining insights from one to thousands of Citrix ADC instances (VPX, MPX, CPX), across container platforms like Mesos/Marathon and Kubernetes.
- Metadata from Citrix ADCs provides valuable information to Citrix ADM for an "App Health Score", including user experience metrics, security threats, and device health.
- Citrix ADM provides capabilities for app-centric lifecycles, configuration at scale, visibility, and security across Citrix ADC instances.
This document discusses the challenges of debugging cloud native applications and proposes live debugging as a solution. It notes that traditional debugging methods like command line debugging, local debugging, logging and tracing, and remote debugging have limitations for dynamic cloud environments. Live debugging allows reproducing bugs by creating the original application state and collecting high-fidelity data without performance impacts or needing to know server details. The document introduces Rookout as a platform that enables non-breaking breakpoints to collect live data with no extra coding or redeploys.
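The "non-breaking breakpoint" idea can be illustrated with Python's own tracing hook: snapshot a function's local variables as it runs, without pausing the program. This toy sketch only demonstrates the concept; products like Rookout do this with far lower overhead and without requiring access to the source line numbers.

```python
# Toy illustration of a non-breaking breakpoint: use sys.settrace to
# copy a function's locals at return time without stopping execution.
# Real live-debugging platforms are far more sophisticated than this.

import sys

snapshots = []

def make_tracer(func_name):
    def tracer(frame, event, arg):
        if frame.f_code.co_name == func_name and event == "return":
            snapshots.append(dict(frame.f_locals))  # copy, don't block
        return tracer                               # keep tracing this frame
    return tracer

def price_with_tax(price, rate):
    total = price * (1 + rate)
    return round(total, 2)

sys.settrace(make_tracer("price_with_tax"))
result = price_with_tax(100.0, 0.2)
sys.settrace(None)   # always remove the hook; tracing slows everything down
```

The key property, mirrored from the document's argument, is that the program never stops: data is collected in flight, so the approach works in environments where attaching a blocking debugger is impossible.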
This document discusses 4 levels of IoT maturity and how Cloud Foundry can help organizations achieve the highest level of maturity. It begins with an analogy about turning raw data into a gourmet meal using a kitchen and restaurant-style services. It then discusses 3 common problems organizations face with data and proposes Cloud Foundry as a solution. The next section discusses a case study of a medical device company using Cloud Foundry to securely and cost-effectively monitor devices. It concludes by recommending some open-source IoT apps to try on Cloud Foundry.
Comparing Microsoft SQL Server 2019 Performance Across Various Kubernetes Pla... (DevOps.com)
With the growing adoption of Kubernetes, organizations want to take advantage of containerized Microsoft SQL Server 2019 to optimize transactional performance and accelerate time-to-insights from their business-critical data. However, as enterprises embrace a hybrid cloud strategy, they need to weigh performance, cost, and data protection requirements for running enterprise-grade SQL Server databases.
In this webinar, we will compare and contrast various cloud-native platforms for SQL Server, helping CIOs, DevOps engineers, database administrators, and application architects determine the platform that best fits their business needs.
Join us as we explore some exciting results from a recent performance benchmark study conducted by McKnight Consulting Group, an independent consulting firm, to compare the performance of Microsoft SQL Server 2019 on the best possible configurations of the following Kubernetes platforms:
Diamanti Enterprise Kubernetes Platform
Amazon Web Services Elastic Kubernetes Service (AWS EKS)
Azure Kubernetes Service (AKS)
Topics will include:
Platform considerations and requirements for running Microsoft SQL Server 2019
Performance comparison and analysis of running SQL Server on various platforms
Best practices for running containerized SQL Server databases in a Kubernetes environment
This presentation reflects on the remarkable advancement of the open source community in the field of cloud computing, and how it now allows us to build reliable software components quickly on truly agile infrastructure.
Building Cloud Native Applications Using Azure Kubernetes Service (Dennis Moon)
This document provides an overview of building cloud-native applications using Azure Kubernetes Service (AKS). It discusses key concepts like containers, Docker, container registries, Kubernetes, and AKS. It also covers modern application architecture principles and 12-factor applications. Additionally, it defines common Kubernetes objects like pods, services, deployments and explains how to secure applications and monitor clusters deployed to AKS. The document recommends getting started with AKS by deploying sample applications from Azure DevOps to an AKS cluster created in the Azure portal or with the Azure CLI.
Next Generation Vulnerability Assessment Using Datadog and Snyk (DevOps.com)
Vulnerability assessment can often be overwhelming for teams: depending on the application, the dependency graph can contain thousands of packages. Triaging vulnerability data and prioritizing actions has historically been a very manual process, until now. With Datadog and Snyk, learn how to trace security and performance issues by leveraging continuous profiling capabilities for actionable insights that help developers remediate problems.
Join us on Thursday, January 21 for a unique opportunity to learn more about continuous profiling, vulnerability management, and the benefit to customers from using both of these products. In this webinar, you will:
Bust some myths around continuous profiling and learn how Datadog differentiates itself
See decorated traces in action for sample Java applications and understand how Snyk + Datadog reduce time to triage supply chain vulnerabilities
Learn roadmap information for upcoming public announcements from both partners
This document discusses DataOps, an agile methodology for developing and deploying data-intensive applications. DataOps supports cross-functional collaboration and fast time to value, expanding DevOps practices to include data-related roles like data engineers and data scientists. Its key goals are continuous model deployment, repeatability, productivity, agility, self-service, and making data central to applications, bringing flexibility and focus to data-driven organizations.
I Segreti per Modernizzare con Successo le Applicazioni [The Secrets to Successfully Modernizing Applications] (Pivotal Cloud-Native... (VMware Tanzu)
This document discusses strategies for modernizing applications to run successfully on cloud platforms like Pivotal Cloud Foundry. It outlines key principles like the Twelve Factor App methodology and establishing clear objectives and metrics. The document also presents a maturity model for applications and an incremental approach to migrating and optimizing existing applications over time. It analyzes which aspects of the Twelve Factors usually require more or less effort during modernization. Finally, it proposes starting the journey by identifying suitable applications and pushing some all the way to production to establish best practices.
How to Make Test Automation for Cloud-based System (Nick Babich)
Automated testing best practices and tips: the QA automation and test automation process flow; continuous delivery, continuous integration, and test-driven development in a cloud-based system; automatic deployment and post-deployment verification; agile development and quality assurance; a cloud-based telephony service.
Adding Value in the Cloud with Performance Test (Rodolfo Kohn)
This document discusses the importance of performance testing cloud applications and outlines best practices for defining performance requirements, testing methodology, and identifying issues. It provides examples of performance problems found in databases, applications, operating systems, and networks. The key goals of performance testing are to understand system behavior under load, find bottlenecks and hidden bugs, and verify that requirements are met.
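The "verify that requirements are met" step usually means checking latency percentiles against a target. A minimal sketch of that analysis, using a nearest-rank percentile and an invented sample set and SLO threshold for illustration:

```python
# Sketch: summarizing response-time samples into the percentiles typically
# checked against performance requirements (e.g. "p95 under 500 ms").
# The sample latencies and threshold below are illustrative only.

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, -(-p * len(ordered) // 100))  # ceil(p * n / 100), >= 1
    return ordered[rank - 1]

latencies_ms = [120, 95, 310, 480, 150, 200, 610, 130, 175, 250]
p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
slo_met = p95 <= 500   # here the tail latency exceeds the target
```

Looking at tail percentiles rather than averages is what surfaces the bottlenecks and hidden bugs the document describes: a healthy median can coexist with a badly degraded p95.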
Argo Workflows 3.0, a detailed look at what’s new from the Argo TeamLibbySchulze
Argo is a set of Kubernetes-native tools for running and managing jobs and applications on Kubernetes including Argo Workflows, Argo Events, Argo CD, and Argo Rollouts. It started as an open source project incubated at Applatix in 2017 and was accepted as an incubating project at the Cloud Native Computing Foundation in 2020. The Argo community has grown significantly with over 15,000 stars on GitHub and contributions from over 900 code contributors. The upcoming roadmap for Argo Workflows 3.x includes additional workflow authoring capabilities, support for multi-cluster/multi-namespace workflows, and improvements to the developer experience.
DevOps Spain 2019. David Cañadillas -CloudbeesatSistemas
This document discusses using Jenkins X to automate CI/CD pipelines on Kubernetes. It begins by introducing Jenkins X and its capabilities for CI/CD automation on Kubernetes using custom resource definitions. It then discusses how Jenkins X embraces a GitOps model using Git as the source of truth for promoting applications through environments. Finally, it invites the reader to a CloudBees event to learn more about building a continuous software delivery system with Jenkins X.
The document discusses migrating to cloud native solutions. It defines cloud native as an approach that exploits the advantages of cloud computing using containers, microservices, and other modern technologies. This allows applications to be scalable, resilient, and manageable. The document outlines the benefits of cloud native and provides a "trail map" to transitioning applications. It also discusses common challenges like technical debt and failing to meet CI/CD expectations, and provides recommendations to address them such as automating processes and simplifying architectures.
Kubernetes 1.21 included 51 enhancements, including 13 features graduating to stable and 15 graduating to beta. Major themes included CronJobs graduating to stable, immutable secrets and configmaps, dual-stack IPv4/IPv6 support, graceful node shutdown, and the persistent volume health monitor. The 1.22 release timeline was also outlined, with enhancements freeze on May 13th and code freeze on July 8th, targeting August 4th for release. Various SIG updates provided information on enhancements for API machinery, apps, auth, CLI, cloud providers, instrumentation, network, node, scheduling and storage.
The document discusses continuous integration, continuous deployment, and infrastructure as code for modern applications. It describes how AWS services like CodePipeline, CodeBuild, CodeDeploy, and CloudFormation can be used to automate the build, test, and deployment of serverless and containerized applications. Continuous integration ensures code changes are built and tested regularly. Continuous deployment enables automated deployments to staging and production. Modeling infrastructure as code allows infrastructure changes to be released predictably using the same tools as code changes.
This document discusses why cloud native computing matters and provides three case studies. It begins by explaining how infrastructure is changing with the rise of containerization solutions in the 2010s. It then discusses why people use cloud native technologies because they work well and have a great community behind them. Three case studies are presented where companies moved workloads to cloud native solutions on Kubernetes to increase agility, reduce costs, and improve developer productivity. The document concludes by noting that while technology challenges can be solved, changing organizational culture can be the hardest challenge to address.
This document provides an overview of CI/CD on Google Cloud Platform. It discusses key DevOps principles like treating infrastructure as code and automating processes. It then describes how GCP services like Cloud Build, Container Registry, Source Repositories, and Stackdriver can help achieve CI/CD. Spinnaker is mentioned as an open-source continuous delivery platform that integrates well with GCP. Overall the document outlines the benefits of CI/CD and how GCP makes CI/CD implementation easy and scalable.
CWIN17 london becoming cloud native part 2 - guy martin dockerCapgemini
This document discusses how organizations can become cloud native by embracing the full opportunity from cloud. It identifies six key steps: 1) delivering business visible and impactful benefits, 2) technical solutions that deliver the business case, 3) empowering a dedicated cloud services team, 4) creating a cloud service vending machine, 5) establishing a blueprint for integrating cloud into existing IT, and 6) implementing automated application and infrastructure pipelines. It then discusses how Docker can help organizations modernize traditional applications and build a secure software supply chain through containerization.
High-Precision GPS Positioning for Spring DevelopersVMware Tanzu
This document discusses high-precision GPS positioning for Spring developers. It covers GPS fundamentals and hardware, processing positioning data, and visualization. It describes using dual frequency GPS, consuming correction data via NTRIP, and processing NMEA data with libraries like GPSD. The document demonstrates receiving GPS data from an external receiver into a Spring Boot app using Spring Integration, exporting metrics to Prometheus and Grafana, and using QGIS and mobile apps for field data collection and visualization.
Enabling application portability with the greatest of ease!Ken Owens
This document discusses enabling application portability with microservices using Project Shipped. It notes the challenges of developing applications in the digital disruption era across multiple languages, data sources, and clouds. Project Shipped enhances the software development lifecycle to provide continuous integration and deployment of microservices across internal and external clouds. It demonstrates using Mantl and Consul for microservice discovery, load balancing and deployment to multiple environments. The presentation concludes by discussing a proof of concept using Project Shipped and Cisco's CMX API to build and deploy a microservice to different environments.
This document discusses how Citrix Application Delivery Management (ADM) can be used to manage Citrix ADC instances at scale in cloud-native environments. Key points include:
- Citrix ADM allows controlling and gaining insights from one to thousands of Citrix ADC instances (VPX, MPX, CPX), across container platforms like Mesos/Marathon and Kubernetes.
- Metadata from Citrix ADCs provides valuable information to Citrix ADM for an "App Health Score", including user experience metrics, security threats, and device health.
- Citrix ADM provides capabilities for app-centric lifecycles, configuration at scale, visibility, and security across Citrix ADC instances.
This document discusses the challenges of debugging cloud native applications and proposes live debugging as a solution. It notes that traditional debugging methods like command line debugging, local debugging, logging and tracing, and remote debugging have limitations for dynamic cloud environments. Live debugging allows reproducing bugs by creating the original application state and collecting high-fidelity data without performance impacts or needing to know server details. The document introduces Rookout as a platform that enables non-breaking breakpoints to collect live data with no extra coding or redeploys.
This document discusses 4 levels of IoT maturity and how Cloud Foundry can help organizations achieve the highest level of maturity. It begins with an analogy about turning raw data into a gourmet meal using a kitchen and restaurant-style services. It then discusses 3 common problems organizations face with data and proposes Cloud Foundry as a solution. The next section discusses a case study of a medical device company using Cloud Foundry to securely and cost-effectively monitor devices. It concludes by recommending some open-source IoT apps to try on Cloud Foundry.
Comparing Microsoft SQL Server 2019 Performance Across Various Kubernetes Pla...DevOps.com
With the growing adoption of Kubernetes, organizations want to take advantage of containerized Microsoft SQL Server 2019 to optimize transactional performance and accelerate time-to-insights from their business-critical data. However, as enterprises embrace hybrid cloud strategy, they need to consider several aspects based on the performance, cost and data protection requirements for running enterprise-grade SQL Server databases.
In this webinar, we will compare and contrast various cloud-native platforms for SQL Server that would help CIOs, DevOps engineers, database administrators and applications architects to determine the most suitable platform that fits their business needs.
Join us as we explore some exciting results from a recent performance benchmark study conducted by McKnight Consulting Group, an independent consulting firm, to compare the performance of Microsoft SQL Server 2019 on the best possible configurations of the following Kubernetes platforms:
Diamanti Enterprise Kubernetes Platform
Amazon Web Services Elastic Kubernetes Service (AWS EKS)
Azure Kubernetes Service (AKS)
Topics will include:
Platform considerations and requirements for running Microsoft SQL Server 2019
Performance comparison and analysis of running SQL Server on various platforms
Best practices for running containerized SQL Server databases in Kubernetes environments
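A common starting point on any of the Kubernetes platforms compared above is a StatefulSet that pins SQL Server to persistent storage. The manifest below is an illustrative sketch, not a configuration from the benchmark study; the secret name, labels, and storage size are assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql
spec:
  serviceName: mssql
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql-secret     # hypothetical Secret holding the SA password
              key: sa-password
        volumeMounts:
        - name: data
          mountPath: /var/opt/mssql  # where SQL Server keeps its databases
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 8Gi
```

A StatefulSet (rather than a Deployment) gives the pod a stable identity and a per-replica PersistentVolumeClaim, which matters for a database whose data must survive rescheduling.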
This presentation reflects on the remarkable advances the open source community has made in the field of cloud computing, and on how those advances now allow us to build reliable software components quickly on truly agile infrastructure.
Building Cloud Native Applications Using Azure Kubernetes ServiceDennis Moon
This document provides an overview of building cloud-native applications using Azure Kubernetes Service (AKS). It discusses key concepts like containers, Docker, container registries, Kubernetes, and AKS. It also covers modern application architecture principles and 12-factor applications. Additionally, it defines common Kubernetes objects like pods, services, deployments and explains how to secure applications and monitor clusters deployed to AKS. The document recommends getting started with AKS by deploying sample applications from Azure DevOps to an AKS cluster created in the Azure portal or with the Azure CLI.
Next Generation Vulnerability Assessment Using Datadog and SnykDevOps.com
Vulnerability assessment for teams can often be overwhelming. The dependency graph can run to thousands of packages, depending on the application. Triaging vulnerability data and prioritizing actions has historically been a very manual process, until now. With Datadog and Snyk, learn how to trace security and performance issues by leveraging continuous profiling capabilities for actionable insights that help developers remediate problems.
Join us on Thursday, January 21 for a unique opportunity to learn more about continuous profiling, vulnerability management, and the benefit to customers from using both of these products. In this webinar, you will:
Bust some myths around continuous profiling and learn how Datadog differentiates itself
See decorated traces in action for sample Java applications and understand how Snyk + Datadog reduce time to triage supply chain vulnerabilities
Learn roadmap information for upcoming public announcements from both partners
This document discusses DataOps, which is an agile methodology for developing and deploying data-intensive applications. DataOps supports cross-functional collaboration and fast time to value. It expands on DevOps practices to include data-related roles like data engineers and data scientists. The key goals of DataOps are to promote continuous model deployment, repeatability, productivity, agility, self-service, and to make data central to applications. It discusses how DataOps brings flexibility and focus to data-driven organizations through principles like continuous model deployment, improved efficiency, and faster time to value.
The Secrets to Successfully Modernizing Applications (Pivotal Cloud-Native...VMware Tanzu
This document discusses strategies for modernizing applications to run successfully on cloud platforms like Pivotal Cloud Foundry. It outlines key principles like the Twelve Factor App methodology and establishing clear objectives and metrics. The document also presents a maturity model for applications and an incremental approach to migrating and optimizing existing applications over time. It analyzes which aspects of the Twelve Factors usually require more or less effort during modernization. Finally, it proposes starting the journey by identifying suitable applications and pushing some all the way to production to establish best practices.
How to Make Test Automation for Cloud-based SystemNick Babich
Automated Testing Best Practices and Tips. QA Automation and Test automation process flow. Continuous Delivery, Continuous integration and Test-driven development in cloud-based system. Automatic Deployment and Post-deployment verification. Agile development and quality assurance. Cloud-based telephony service.
Adding Value in the Cloud with Performance TestRodolfo Kohn
This document discusses the importance of performance testing cloud applications and outlines best practices for defining performance requirements, testing methodology, and identifying issues. It provides examples of performance problems found in databases, applications, operating systems, and networks. The key goals of performance testing are to understand system behavior under load, find bottlenecks and hidden bugs, and verify that requirements are met.
Cloud Computing System models for Distributed and cloud computing & Performan...hrmalik20
Advantages of clouds over traditional distributed systems; clouds and service-oriented architecture (SOA) layered architecture; performance metrics and scalability analysis; system efficiency; performance challenges in cloud computing; what cloud computing is and why it is distinctive; cloud service delivery models and their performance challenges; cloud computing security: what it means, the cloud security landscape, and the distinctions between security and privacy; energy efficiency of cloud computing.
The document discusses different platforms for deploying microservices using containers including Docker, Kubernetes, AWS ECS, AWS Elastic Beanstalk, OpenShift, and Fabric8. Docker allows deploying containers but does not provide orchestration capabilities. Kubernetes provides orchestration of containers across clusters and can be deployed on-premises or on cloud providers. AWS ECS and Elastic Beanstalk integrate Docker containers with AWS but lack portability. OpenShift is a distribution of Kubernetes that can be used to deploy and manage containerized applications. Fabric8 builds upon Docker and Kubernetes to provide a full Platform as a Service with DevOps capabilities.
With an increasing number of applications being deployed in the cloud, this trend will soon touch performance testers within every organisation. This presentation will dispel the hype, tell you what you need to know to embrace this opportunity, and answer the following questions:
* What are the challenges specifically related to performance testing cloud-based applications?
* What are some common performance problems seen in cloud-based applications, and how can you test for them?
* How will cloud-based load generators help your performance testing?
Don't get left behind! A solid understanding of cloud concepts will be invaluable to your testing career.
This presentation was originally given at Iqnite Australia (Melbourne) on October 16th, 2014.
Containerizing a REST API and Deploying to KubernetesAshley Roach
This document discusses containerizing a REST microservice and deploying it to Kubernetes. It begins by explaining why to build a REST API using Swagger and containerization. It then demonstrates containerizing a sample REST API created with Swagger-node. Finally, it covers deploying the containerized REST API to Kubernetes, including using Kubernetes templates for the deployment and service, and deploying manually or through a CI system.
This document outlines a performance testing strategy for a cloud-based system using an open source testing tool. It describes introducing virtual users gradually from 1 to 3000 to test response times. Response times remained under 5 seconds for up to 1500 users but slowed for 3000 users. Testing showed faster response for high-speed internet and unloaded servers. The strategy successfully tested the system's ability to handle increasing loads in the cloud. Future work could include hosting the testing tool in the cloud and expanding performance analysis.
This document discusses Kubernetes usage at VMware SAAS. It covers dynamic provisioning of applications on Kubernetes, monitoring tools used like DataDog and Log Insight, and best practices for upgrading Kubernetes clusters. Key points include using stateless applications where possible, service discovery using Kubernetes services, dynamic provisioning using an onboarding service, and performing rolling upgrades for stateful applications to minimize downtime.
This document discusses building HTML5 virtual reality apps using Intel XDK. It explains that HTML5 is compelling for cross-platform VR apps because it is cross-platform, collaborative, and allows editing and testing changes quickly. Intel XDK can be used to build HTML5 and Cordova apps, and Cordova APIs allow accessing device features through JavaScript. The document provides examples of how to implement stereoscopic rendering, head tracking, and accessing device features in HTML5 VR apps.
The document provides licensing information and legal disclaimers for any intellectual property related to the materials. It notes that the information on products, services, and processes is subject to change and advises contacting an Intel representative for the latest specifications. The document contains optimization notices for Intel compilers and performance tests on Intel microprocessors.
The document discusses Intel's Network Builders program, which aims to accelerate software-defined infrastructure adoption through open standards and platforms. It does this by investing in strong ecosystems, committing to open source, and leveraging Intel's technology leadership. The program enables partners through technical resources, matchmaking opportunities, and marketing support. It also works with network operators on proofs of concept and trials. The goal is to move the industry from early SDN/NFV trials to commercial deployments through this ecosystem collaboration.
This session was held by Vladimir Brenner, Partner Account Manager, Disruptors & AI, Intel AI at the Dive into H2O: London training on June 17, 2019.
Please find the recording here: https://youtu.be/60o3eyG5OLM
This document summarizes Todd Rimmer's presentation on Intel's Omni-Path fabric technology. It discusses Omni-Path innovations like extended link MTUs and management protocols. It also describes Intel's strategy to open source and upstream all Omni-Path host software into the Open Fabrics Alliance, while facing some challenges due to OFA's InfiniBand orientation. Ongoing work includes improving fabric scalability, debug features, and introducing a virtual NIC driver.
The document discusses the future of storage technologies for cloud computing. It notes that cloud adoption is driving significant business opportunities but also increasing complexity. Intel's strategy is to build an open ecosystem, reduce complexity, and enable massive compute capabilities. New storage technologies like SSDs and NVMe can help optimize performance by providing much higher bandwidth and lower latency compared to hard disk drives. For example, using Intel SSDs with NVMe instead of HDDs can provide over 100x cost savings and 1400x power savings while also improving performance for database restart tasks by over 30 times.
TDC2018SP | AI Track - Artificial Intelligence on Intel Architecturetdc-globalcode
This document contains several legal notices and disclaimers from Intel regarding their products. No license is granted to any intellectual property and Intel assumes no liability relating to the sale and use of their products. Intel products are not intended for medical or life critical applications. Specifications and descriptions are subject to change without notice.
An easy-to-use, automatic, self-contained toolkit to accelerate ODM* benchmarking of NFVi-ready server designs on Intel® Xeon® Scalable server platforms, using a golden benchmark to characterize baseline performance of DPDK, QAT, and OVS running on a single Xeon SP server.
This session will describe and demo methods to connect the Intel Edison to Amazon AWS in order to create a versatile IoT structure. The Intel Edison is a powerful system on chip module, the size of a postage stamp with powerful on board processing. It can be used as a sensor hub to gather data, a control board for actuators, and a gateway to connect to the cloud. When combined with the powerful services offered by AWS it can form the basis for many IoT solutions.
AWS DevDay San Francisco, June 21, 2016.
Presenter: Martin Kronberg, Intel IoT Evangelist
Intel’s Big Data and Hadoop Security Initiatives - StampedeCon 2014StampedeCon
At StampedeCon 2014, Todd Speck (Intel) presented "Intel’s Big Data and Hadoop Security Initiatives."
In this talk, we will cover various aspects of software and hardware initiatives that Intel is contributing to Hadoop as well as other aspects of our involvement in solutions for Big Data and Hadoop, with a special focus on security. We will discuss specific security initiatives as well as our recent partnership with Cloudera. You should leave the session with a clear understanding of Intel’s involvement and contributions to Hadoop today and coming in the near future.
Trends at the junction of Big Data Analytics, Machine Learning, and Supercomput...Igor José F. Freitas
This document discusses trends in machine learning, big data analytics, and supercomputing. It describes how machine learning is evolving from classic techniques like regression and clustering to deep learning using neural networks. It also discusses how high performance computing and big data analytics are converging, with workloads varying in their resource needs for data and compute. The document outlines Intel's strategy to apply their high performance computing approach to artificial intelligence and machine learning.
This document discusses new hardware features including CAT, COD, and Haswell, as well as network platforms. It provides an overview of run-to-completion and pipeline software models for network processing. Run-to-completion allows I/O and application work to be handled on a single core, while pipeline distributes packets to other cores for application work. Lockless queues are used to share data between cores and threads. Rings are the primary mechanism to move data between software units and I/O sources in DPDK.
A Path to NFV/SDN - Intel. Michael Brennan, INTELWalton Institute
This document discusses Intel's approach to accelerating the adoption of Network Functions Virtualization (NFV) and Software-Defined Networking (SDN). It outlines Intel's open strategy of advancing open source software and standards, delivering open reference designs for Intel platforms, and enabling a broad ecosystem of partners. The goal is to help networking platforms based on Intel architecture replace proprietary, dedicated networking appliances. The document also references Intel's Open Network Platform (ONP) server and switch software reference designs, and examples of trials and deployments Intel is collaborating on with telecom, cloud, and enterprise customers.
Ready access to high performance Python with Intel Distribution for Python 2018AWS User Group Bengaluru
This document discusses Intel's Intel Distribution for Python (IDP) which aims to advance Python performance closer to native code speeds. IDP provides prebuilt and optimized packages for Python that leverage Intel performance libraries to accelerate numerical computing, machine learning, and data analytics workloads. It also includes tools like Intel VTune Amplifier for profiling Python applications to identify optimization opportunities.
Cloud Technology: Now Entering the Business Process Phasefinteligent
Cloud technology is moving into its next phase of business use: cloud models are entering the "business process" phase of delivering services. Cloud technologies can now generate higher returns for businesses, and consulting with Intel can help optimize cloud solutions.
Preparing the Data Center for the Internet of ThingsIntel IoT
Intel’s Mark Skarpness provides an overview of the Internet of Things and discusses how the data center is essential for the IoT.
For more information go to www.intel.com/iot
It’s not news to anyone in IT that container technology has become one of the fastest growing areas of innovation, facilitating ease of packaging and consistent deployment environments for applications. If you’re in IT, you are also likely familiar with Kubernetes—the leading container orchestration platform.
This advanced technology session will cover the integration of Nutanix Enterprise Cloud OS platform with Kubernetes. Binny Gill, Nutanix Chief Architect, and Allan Naim, Google Product Manager, will guide you through how Kubernetes is enabled by Google in GKE and by Nutanix on-premises, to provide a simple, consistent, and hybrid platform for all your containerized apps.
Similar to Tectonic Summit 2016: It's Go Time
Tectonic Summit 2016: Multi-Cluster Kubernetes: Planning for UnknownsCoreOS
- Concur is a travel and expense management company with 6500+ employees and offices worldwide. They process over 70 million transactions and $50 billion in travel and expense spend annually.
- The presenter is a Principal Architect at Concur who has been working with Kubernetes since 2015. He discusses why Concur chose Kubernetes and CoreOS for container orchestration.
- Concur runs multiple Kubernetes clusters across different regions for high availability. A custom tool called kube2cnqr manages load balancing between clusters.
Tectonic Summit 2016: Networking for Kubernetes CoreOS
Sreekanth Pothanis, Cloud Engineering, eBay shares a networking Kubernetes tale from the trenches.
Networking is the hardest component in anyone's infrastructure; everything depends on it, especially in web-scale infrastructure with tens of thousands of servers. eBay is investing heavily in Kubernetes, and networking is again one of the areas where we have the most difficulty.
During the course of this talk we will go through various approaches we tried to make container networking conform to Kubernetes networking principles, while ensuring that it adapts to the existing networking models our infrastructure supports.
We will also cover how we have automated the process of setting up networking for Kubernetes clusters and how it offers seamless integration with non-Kubernetes workloads.
Tectonic Summit 2016: Brandon Philips, CTO of CoreOS, KeynoteCoreOS
The document discusses CoreOS's expertise across the technology stack for container-based applications. This includes Linux, container engines, container image specifications, clustered databases like etcd, cloud independence, identity federation, and more. CoreOS is focused on open standards through initiatives like the Open Container Initiative and ensuring technologies like Kubernetes, rkt, and etcd can scale to power large production deployments.
Tectonic Summit 2016: Alex Polvi, CEO of CoreOS, KeynoteCoreOS
CoreOS has renamed their CoreOS Linux distribution to Container Linux to better reflect its purpose of running containers. Container Linux uses self-driving mechanisms to automatically keep systems updated without downtime. CoreOS Tectonic is also now self-driving Kubernetes, making it easier to manage Kubernetes clusters. Both Container Linux and Tectonic are aimed at allowing users to spend less time on maintenance and more on innovation.
Tectonic Summit 2016: Kubernetes 1.5 and BeyondCoreOS
Kubernetes 1.5 introduces several new features to simplify cluster setup and improve scheduling. It provides an easy way to initialize a Kubernetes cluster with a single command using kubeadm. Multiple clusters can also be easily federated together using kubefed. Additionally, Kubernetes 1.5 enhances scheduling capabilities with taints and tolerations, which allow pods to be selectively scheduled to nodes based on hardware requirements like GPUs. This helps optimize workload placement on large, heterogeneous clusters.
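The taints-and-tolerations mechanism described above works in two halves: an operator taints a node so that ordinary pods are repelled from it, and workloads that genuinely need that hardware carry a matching toleration. A sketch for the GPU case (node name, taint key, and image are illustrative):

```yaml
# First, taint the GPU node so general workloads avoid it:
#   kubectl taint nodes gpu-node-1 hardware=gpu:NoSchedule
#
# Then give GPU workloads a matching toleration:
apiVersion: v1
kind: Pod
metadata:
  name: cuda-job
spec:
  tolerations:
  - key: "hardware"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: main
    image: example/cuda-job:latest   # hypothetical GPU workload image
```

Note that a toleration only permits scheduling onto the tainted node; pairing it with a node selector or affinity rule is what actually attracts the pod there.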
Xiang Li gave a presentation on etcd, a distributed key-value store. He discussed how etcd can be used to coordinate CoreOS cluster updates and store application configurations. He highlighted requirements like strong consistency, high availability, and watchability. Li also demonstrated etcd's capabilities including key-value operations, streaming watches, multi-version concurrency control, and leases. He showed how etcd achieves high performance and reliability through techniques such as incremental snapshots, write-ahead logging, and failure injection testing. Finally, he announced that etcd version 3.0 beta is now available.
This document discusses Kube-AWS, which is a tool for deploying Kubernetes clusters on AWS. It outlines the design goals of creating artifacts that are secure, reproducible, and auditable. It then demonstrates "under the hood" how Kube-AWS works by initializing a cluster configuration, rendering assets, deploying the cluster, exporting the deployment details, and making changes to reproduce the cluster. Recent work is noted along with future plans.
- Clair is an open source project for analyzing container images for known software vulnerabilities. It uses static analysis to detect vulnerabilities by examining the content of container images without running the containers.
- Clair's analysis can be done once and reused to inform about current and future vulnerabilities. It also suggests fixes and notifies users about new vulnerabilities.
- Clair is designed as an extensible framework, including detectors for vulnerabilities from different sources, datastores, updaters, notifiers and support for multiple container formats and operating systems. The presenter discusses current and potential future capabilities.
Tectonic Summit 2015: Containers Across the Cloud and Data CenterCoreOS
At Tectonic Summit in December 2015, Rob Cornish, CTO of International Securities Exchange, and Paul Morgan, Systems Architect, International Securities Exchange spoke about how they use containers across the cloud and data center.
The last decade belonged to virtual machines and the next one belongs to containers. CoreOS is a new Linux distribution designed specifically for application containers and running them at scale. This talk will examine all the major components of CoreOS (etcd, fleet, docker, systemd) and how these components work together.
Things to Consider When Choosing a Website Developer for your Website | FODUUFODUU
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, reputation and reviews, cost and budget considerations, and post-launch support. Make an informed decision to ensure your website meets your business goals.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
1. Jonathan Donaldson
Vice President, Data Center Group
GM, Software Defined Infrastructure
Intel Corp.
It’s ‘Go’ Time: Taking Kubernetes to the Enterprise
10. Up to 800%
2015: Memory Functions
(memmove, memset, indexof, etc)
2016: More Memory, Crypto,
Compiler, Intel XED
Cool!
11. A Foundation for Testing Code at Scale
The CNCF Community Cluster
Gain access to 1K node cluster
Optimize for scale
Accepts projects that will upstream
code and benefit the industry
https://github.com/cncf/cluster
Other names and brands may be claimed as the property of others.
12. Intel® Cloud Summit 2016
OpenStack control plane services run on rkt
OpenStack compute node runs on rkt
K8s and OpenStack aware of each other
(nova-kubernetes-drain)
OpenStack upstream solution w/ Kolla
Full support for K8S volume plug-in for Cinder
in coreos-kubernetes
PoC of Live Migration from OpenStack to K8S on Nova
Tectonic stack
OPTIMIZATIONS
13. Intel® Cloud Summit 2016
VM-grade security for rkt
QEMU support enabled in rkt
Container Network Interface (CNI) support for
rkt/KVM (in dev)
E2E conformance tests enabled on bare metal
Commits present in the K8s repo
The security and isolation of
Intel® VT-x for containers.
14. Intel® Cloud Summit 2016
Snap Telemetry
Collect, process, and publish telemetry
data at scale.
Over 70 plugins w/ libraries for C++, Python, Golang
Intelligent metric selection w/ Dynamic Metrics
Package/automation support
Distributed workflows, unified logging, CRON support,
build automation, and more
https://github.com/intelsdi-x/snap
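The collect/process/publish workflow above is wired together by a Snap task manifest. The fragment below is an illustrative manifest (the metric path, plugin name, and file path are examples, not taken from the deck) that collects a one-minute load-average metric every second and publishes it to a file:

```json
{
  "version": 1,
  "schedule": {
    "type": "simple",
    "interval": "1s"
  },
  "workflow": {
    "collect": {
      "metrics": {
        "/intel/psutil/load/load1": {}
      },
      "publish": [
        {
          "plugin_name": "file",
          "config": {
            "file": "/tmp/snap-metrics.log"
          }
        }
      ]
    }
  }
}
```

Swapping the publish section for a different plugin is all it takes to redirect the same metrics to another backend, which is what makes the plugin model useful at scale.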
15. Thank You
Accelerate Cloud Native to Mainstream
Deploy and Manage Solutions with Kubernetes
Test Apps at Scale with the CNCF Community Cluster
Undeniable shift: container orchestration is the future. Why? Dev efficiency, microservices, hybrid cloud/portability. According to 451 Research, interest in containers and production adoption has more than doubled since last year, and containers are positioned for wide adoption in 2017. We’re encouraged and excited to see how quickly this trend has hit the mainstream; KubeCon went from small to sold out.
This is happening fast. Virtualization has existed for more than 30 years, yet it has been only 10 years since the first public cloud offering, and in 2008 no one believed it would go mainstream. Now we see rapid evolution of containers, DevOps, and CI/CD, and an uptick in open source. Containers have gone from buzzword to ready for mass adoption.
Kubernetes is seeing massive growth and has eclipsed Docker Swarm in production: K8s 40.2K commits, Swarm 3.2K, Mesos 10.8K.
Tectonic can be used to run container-based workloads across a variety of cloud services or within an organization's own datacenter. Great to see it evolve into a turnkey distro for k8s.
The wide adoption of the cloud is foundational, and the majority of these examples now use k8s. As more enterprises move to the cloud, especially private cloud, more adopt DevOps and containers, enabling CI/CD and running infrastructure as code.
Private k8s support: VW, Wells Fargo, Autodesk (BMW is using OpenStack, with no mention of k8s yet)
Tier 2: Huawei launched its own k8s-based container engine; Tata Comms
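To make “infrastructure as code” concrete: the desired state of an application lives in a declarative manifest that a CI/CD pipeline can version-control and apply. Below is a minimal, illustrative Kubernetes Deployment (the names and image are made up, and the apps/v1 API shown postdates the 2016-era extensions/v1beta1):

```yaml
# Declarative desired state: three replicas of an nginx pod.
# A pipeline applies this with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.11
```

Because the manifest, not a sequence of imperative commands, defines the system, the same file works unchanged across public and private clusters, which is the portability argument made throughout this deck.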
Public: Google, Azure, Facebook, Baidu
Wide support across cloud solutions, from public to private: AWS with CoreOS support as of Nov 15, Azure as of Nov 2016 (Brendan Burns recently joined), GCP starting in 2014, OpenStack in 2016 (and Stackanetes), Cloud Foundry in Sept 2016. A common orchestrator across clouds will enable migrations and hybrid environments.
http://thenewstack.io/coreos-is-funding-kubernetes-development-on-aws/
http://thenewstack.io/cloud-foundrys-service-broker-api-role-in-kubernetes-and-open-source-platforms/
Why Intel? We are also a software house. Last year we began collaborating with CoreOS and contributing to K8s: dev collaboration, upstream code, OpenStack, Stackanetes, and optimizing for Intel hardware. The data center is central to innovation, so we’re extending the value of data center hardware with great software like k8s, and optimizing our hardware to run k8s better. IA-optimized solutions bring huge benefits: efficiency, agility, interoperability, cost savings.
We know how to do this. Intel builds the world’s fastest and most efficient data centers, including Google’s. Intel and CoreOS share a common vision: make it easy for enterprises to see the same benefits in their private clouds as Google does (GIFEE). Move with velocity and high reliability. Apps always on, and fast. From Snapchat to robots conducting surgery, we are building and running the apps for an always-on, always-connected world.
We can’t do it alone. We are bringing the industry together to accelerate technology innovation and make GIFEE easier to adopt and democratized.
Last year, we announced Go optimizations of up to 200%. This year we quadrupled that: Go runtime memory/string and crypto functions targeted with AVX and AVX2, yielding gains from 40% to 800%. We open-sourced the Intel XED library to help automate asm-objdump generation for the Go toolchain, enabling all instructions, including Skylake server. General high-level compiler optimizations add up to 10%.
A few examples of recent users…
1.) IBM: OpenStack container management at scale; results shared at the Barcelona OpenStack Summit.
2.) Red Hat OpenShift: OpenShift performance + K8s scalability.
3.) BOINC: contributing to curing the Zika virus in IBM’s World Community Grid.
11 new in the queue…
Apply today and bring the industry forward
Note: the max is actually 400 nodes now, and around 800 in Q1 2017.
Optimizing OpenStack and k8s to work together.
Enhancing rkt containers with VM-level security using Intel VT-x. Speed plus a security root of trust.
Also, Snap telemetry. We announced Snap here last year, and just released general availability of Snap 2 weeks ago. Monitor infrastructure capabilities, utilization, and events in real time to make automation and orchestration easier and more intelligent. 6 plugins for OpenStack (Nova, Neutron, Cinder, Keystone, Ceph). “kubesnap” K8s daemon PoC. Production ready.