A company’s success depends on critical application performance and availability. Upgrades and patches can improve application efficiency and user experience, but making the necessary changes requires resource-intensive environments to test updates before deploying them. What’s more, these applications need to continue accessing data even in the event of an on-premises crisis.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. Storage latency for the VMAX 250F peaked at a millisecond in our testing while IOPS stayed within an acceptable range. The solution also kept data highly available with no downtime or performance drop when we initiated a lost host connection for the primary storage. Consider the Dell EMC VMAX 250F array for your datacenter to support the critical database applications that drive your company.
Symantec NetBackup 7.6 benchmark comparison: Data protection in a large-scale... (Principled Technologies)
A VM’s footprint can grow quickly in an enterprise environment, and large-scale deployments of thousands of VMs are common. As the number of deployed systems grows, so does the risk of failure. Because critical failures can become unavoidable, data protection from a backup solution promotes business continuity. Elongated protection windows that require multiple jobs of different types can create resource contention with production environments and consume valuable IT admin time, so keeping system backups within a finite window matters.
In our hands-on SAN backup testing, the Symantec NetBackup Integrated Appliance running NetBackup 7.6 offered application protection to 1,000 VMs in 66.8 percent less time than Competitor “E” did. In addition, the Symantec NetBackup Integrated Appliance with NetBackup 7.6 created backup images that offered granular recovery without additional steps. These time and effort savings can scale as your VM footprint grows, allowing you to execute both system protection and user-friendly, simplified recovery.
Slow performance and unavailable critical applications can impede a company’s progress. You can apply patches and updates to improve application quality and user experience, but these changes need to be tested in resource-intensive environments before deployment. Keeping these applications connected to their data is vital, too, as on-premises events can put availability at risk.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. As we added VMs designed for test/dev environments, the production workload maintained an acceptable level of IOPS and achieved an average storage latency of less than a millisecond. The solution also kept data highly available with no downtime and no performance drop when we initiated a lost host connection for the primary storage. To run your company’s critical database applications, consider the Dell EMC VMAX 250F for your datacenter.
SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1... (Principled Technologies)
Upgrading the hardware running your SQL Server deployment to a space-efficient, modular Dell EMC environment can help your company achieve a great deal of database work in a small amount of space. With Dell Express Flash technology, adding a caching solution such as Samsung AutoCache can make the environment even more efficient.
In the PT labs, we ran a mixed database workload on six Dell EMC PowerEdge FC630 servers, powered by Intel Xeon E5-2667 processors, in three PowerEdge FX2 enclosures. The solution included the QLogic QLE2692 16Gb FC adapter with StorFusion Technology, Dell EMC Storage SC9000 all-flash storage, and Dell EMC PowerEdge Express Flash NVMe Performance PCIe SSDs.
With no caching solution, the 36 SQL Server 2016 VMs on the six servers achieved a total of 431,839 orders per minute while an Oracle workload ran on 12 VMs. When we added a caching solution to accelerate the SQL database volumes, performance across the 36 SQL Server 2016 VMs roughly doubled to 871,580 orders per minute. These numbers show the power of server-side caching to alleviate pressure on the storage array, allowing you to get even more out of the Dell EMC modern environment.
Business-critical applications on VMware vSphere 6, VMware Virtual SAN, and V... (Principled Technologies)
Moving to the virtualized, software-defined datacenter can offer real benefits to today’s organizations. As our testing showed, virtualizing business-critical applications with VMware vSphere, VMware Virtual SAN, and VMware NSX not only delivered reliable performance in a peak utilization scenario, but also delivered business continuity during and after a simulated site evacuation.
Using this VMware Validated Design with QCT hardware and Intel SSDs, we demonstrated a virtualized critical Oracle Database application environment delivering strong performance, even when under extreme duress.
Recognizing that many organizations have multiple sites, we also proved that our environment performed reliably under a site evacuation scenario, migrating the primary site VMs to the secondary in just over eight minutes with no downtime.
With these features and strengths, the VMware Validated Design SDDC is a proven solution that allows for efficient deployment of components and can help improve the reliability, flexibility, and mobility of your multi-site environment.
Ensure greater uptime and boost VMware vSAN cluster performance with the Del... (Principled Technologies)
The Dell EMC PowerEdge MX with VMware vSAN Ready Nodes delivered a 55.9% faster response time than a Cisco UCS solution and a 41.3% faster response time than an HPE Synergy solution
3 key wins: Dell EMC PowerEdge MX with OpenManage Enterprise over Cisco UCS a... (Principled Technologies)
In head-to-head tests, the modular Dell EMC™ PowerEdge™ MX7000 with OpenManage™ Enterprise reduced admin time and effort on repetitive tasks when compared to Cisco UCS® 5108 with Cisco UCS Manager and HPE Synergy with OneView.
Your datacenter is capable of doing great things—if you let it. Upgrades from Intel for compute, storage, and networking components can help your business support new services and expand your customer base. In our hands-on testing, we found that new Intel processors, high-bandwidth network components, and SATA or PCIe SSDs working together can boost your datacenter’s capabilities, which could translate to better business operations for your organization.
Run compute-intensive Apache Hadoop big data workloads faster with Dell EMC P... (Principled Technologies)
Moving compute-intensive, Hadoop big data workloads to current-generation Dell EMC PowerEdge R640 servers powered by 2nd Generation Intel Xeon Scalable processors could allow your organization to better meet the data analysis challenges of today. Faster analysis of large data sets means getting insight into your organization, products, and services sooner, which could help your organization grow and beat its competition.
A single-socket Dell EMC PowerEdge R7515 solution delivered better value on a... (Principled Technologies)
If your company is running important business applications in VMware vSAN clusters of servers that are several years old, chances are good that you’re considering upgrading to newer hardware. Our testing demonstrated that our clusters of single-socket Dell EMC PowerEdge R7515 servers and clusters of dual-socket HPE ProLiant DL380 Gen10 servers could both improve upon the database performance of a legacy cluster with five-year-old servers by more than 50 percent, with the Dell EMC cluster achieving 93.4 percent of the performance of the HPE cluster.
Keep your data safe by moving from unsupported SQL Server 2008 to SQL Server ... (Principled Technologies)
Many small and medium businesses delay updating their server operating systems and applications. When software reaches end-of-support and the vendor ceases to release security updates and patches, businesses that fail to migrate risk incurring downtime and expense. They may encounter technical problems, and their vital data becomes especially vulnerable to cyber attackers, who often target outdated software.
Get higher transaction throughput and better price/performance with an Amazon... (Principled Technologies)
The EBS gp3-backed EC2 r5b.16xlarge instance also delivered lower average transaction latency, offering more consistent transactional database performance than two Microsoft Azure E64ds_v4 VM configurations
Spend less time, effort, and money by choosing a Dell EMC server with pre-ins... (Principled Technologies)
Deploying a Dell EMC PowerEdge R740 with pre-installed Microsoft Windows Server 2016 Standard took less time and fewer steps than deploying the same server without it
With new automation tools, tying up your administrator’s time with repetitive processes can become a thing of the past. Our tests showed how Dell ASM, with the ability to build deployment templates, can save significant administrator time and steps compared to a solution that lacks these features. In an age where business IT demands grow rapidly, providing administrators with the right tools to manage their virtualized infrastructure is critical for keeping your datacenter running efficiently.
Keep remote desktop power users productive with Dell EMC PowerEdge R840 serve... (Principled Technologies)
When the Dell EMC™ PowerEdge™ R840 launched, we found that companies could get more power for their CPU-intensive workloads with this 2U four-socket rack server.1 Now, it presents an opportunity for you to support more power users, speed desktop responsiveness, and grow your employee base.
Dell PowerEdge R920 running Oracle Database: Benefits of upgrading with NVMe ... (Principled Technologies)
Strong server performance is essential to companies running Oracle Database. The new Dell PowerEdge R920 provides strong performance in its base configuration with 24 SAS hard disks, but this performance gets an enormous boost when running the configuration containing NVMe Express Flash PCIe SSDs. In our testing, the upgraded configuration of the Dell PowerEdge R920 delivered 14.9 times the database performance of the base configuration. In addition, in testing the raw I/O throughput of the NVMe Express Flash PCIe SSDs, we saw as much as 192.8 times the IOPS as compared to the base configuration. Given that the storage subsystem is critical in servers and specifically database applications, the performance improvements offered by NVMe Express Flash PCIe SSDs can lead to great service improvements for your customers, making this upgrade a very wise investment.
Watch your transactional database performance climb with Intel Optane DC pers... (Principled Technologies)
Dell EMC PowerEdge R740xd servers with Intel Optane DC persistent memory handled more transactions per minute than configurations with NAND flash NVMe drives or SATA SSDs
Prepare images for machine learning faster with servers powered by AMD EPYC 7... (Principled Technologies)
A server cluster with 3rd Gen AMD EPYC processors achieved higher throughput and took less time to prepare images for classification than a server cluster with 3rd Gen Intel Xeon Platinum 8380 processors
Reap better SQL Server OLTP performance with next-generation Dell EMC PowerEd... (Principled Technologies)
These new servers achieved up to 36.1 percent more OLTP database work than current-generation Dell EMC PowerEdge MX servers, while also lowering application response time
Create useful data center health visualizations with Dell iDRAC Telemetry Ref... (Principled Technologies)
Dell EMC™ has recently released Telemetry Reference tools for the Integrated Dell Remote Access Controller (iDRAC). The Telemetry Reference toolset enables data center engineers to ingest and structure telemetry data streams for visualization with Elastic Stack software. This report serves as a how-to guide for setting up and configuring these components within your own environment.
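As a rough illustration of the ingest step such a pipeline performs, the sketch below flattens a Redfish-style iDRAC metric report into per-metric documents ready for bulk indexing into Elasticsearch. The payload shape and field names follow the generic Redfish telemetry schema and sample values are hypothetical; this is not code from the Dell toolset itself.

```python
import json

def flatten_metric_report(report: dict) -> list:
    """Flatten a Redfish-style MetricReport into one document per metric value.

    Field names (Id, MetricValues, MetricId, ...) follow the Redfish
    telemetry schema; adjust them to match your iDRAC firmware's output.
    """
    docs = []
    report_id = report.get("Id", "unknown")
    for mv in report.get("MetricValues", []):
        docs.append({
            "report": report_id,
            "metric": mv.get("MetricId"),
            "value": mv.get("MetricValue"),
            "timestamp": mv.get("Timestamp"),
        })
    return docs

# A sample payload shaped like a Redfish telemetry report (hypothetical values).
sample = {
    "Id": "PowerMetrics",
    "MetricValues": [
        {"MetricId": "SystemInputPower", "MetricValue": "142",
         "Timestamp": "2021-05-01T12:00:00Z"},
        {"MetricId": "CPUPower", "MetricValue": "65",
         "Timestamp": "2021-05-01T12:00:00Z"},
    ],
}

docs = flatten_metric_report(sample)
print(json.dumps(docs, indent=2))
```

Each flattened document can then be sent to Elasticsearch with a bulk request, after which the Elastic Stack dashboards described in the report take over.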
The combination of 16th Generation Dell PowerEdge servers and VMware vSphere 8.0 provides a resilient and efficient solution for organizations running containerized applications. Upgrading to the hardware and software combination provides helpful features that increase workload availability and streamline management. We found Workload Availability Zones, ClusterClass, and node image customization features to function well and as we expected, thus aiding both development and infrastructure teams in a DevOps context. Deploying vSphere 8.0 on Dell PowerEdge servers enables organizations to take advantage of the benefits of containerization and modernize their IT infrastructure.
Make Kubernetes containers on Dell EMC PowerEdge R740xd servers easier to man... (Principled Technologies)
Running VMware Tanzu on a VMware vSphere 7.0 Update 1 environment with Dell EMC PowerEdge servers provided centralized container management features at reasonable cost
The Carrier DevOps Trend (Presented to Okinawa Open Days Conference) (Alex Henthorn-Iwane)
Telecom carriers are adopting DevOps practices to complement new SDN and NFV network architectures. This presentation to the Okinawa Open Days 2014 conference talks about why this is so, how carriers are going about it, and some best practices.
WKS401 Deploy a Deep Learning Framework on Amazon ECS and EC2 Spot Instances (Amazon Web Services)
Deep learning is an implementation of machine learning that uses neural networks to solve difficult and complex problems, such as computer vision, natural language processing, and recommendations. Due to the availability of deep learning libraries and frameworks, developers have the ability to enhance the capabilities of their applications and projects.
In this workshop, you learn how to build and deploy a powerful deep learning framework called MXNet on containers. The portability and resource management benefit of containers means developers can focus less on infrastructure and more on building. The labs start by demonstrating the automation capabilities of AWS CloudFormation to stand up core infrastructure; as an added bonus, you use Spot Fleet to leverage the cost benefits of using Spot Instances, especially for developer environments. Then, you walk through creating an MXNet container in Docker and deploying it with Amazon ECS. Finally, you walk through an image classification demo of MXNet to validate that everything is working as expected.
Pre-reqs: Laptop and AWS account
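The ECS deployment step the workshop walks through can be sketched as a task definition. The family name, image URI, and resource sizes below are hypothetical placeholders; in practice you would pass this structure to boto3's `register_task_definition` call.

```python
import json

def mxnet_task_definition(image_uri: str) -> dict:
    """Build a minimal ECS task definition for an MXNet container.

    All names and sizes here are illustrative; swap in your own image URI
    and limits. The dict matches the keyword arguments accepted by
    boto3's ecs.register_task_definition().
    """
    return {
        "family": "mxnet-training",   # hypothetical task family name
        "containerDefinitions": [{
            "name": "mxnet",
            "image": image_uri,
            "cpu": 1024,              # 1 vCPU
            "memory": 4096,           # 4 GiB
            "essential": True,
        }],
    }

task_def = mxnet_task_definition(
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/mxnet:latest")
print(json.dumps(task_def, indent=2))

# To register it for real (requires AWS credentials):
#   import boto3
#   boto3.client("ecs").register_task_definition(**task_def)
```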
Getting Started with Platform-as-a-Service (CloudBees)
A short introduction to Platform-as-a-Service, showing you how to use the CloudBees PaaS to develop, test, and run your Java and other web applications in the cloud.
Give DevOps teams self-service resource pools within your private infrastruct... (Principled Technologies)
Give DevOps teams self-service resource pools within your private infrastructure with Dell Technologies APEX cloud and storage solutions. We confirmed several real-world use cases in a VMware vSphere with Tanzu environment
Building internal developer platform with EKS and GitOps (Weaveworks)
An internal developer platform (IDP) is a set of standardized tools and technologies that enables development teams to self-service, offering convenient access to the resources they need to create and deploy compliant code. The ultimate goal is to facilitate automation, autonomy, and productivity across large teams. However, creating an IDP is highly complex, especially when bridging hybrid scenarios. In fact, build timelines can run anywhere from one to two years!
In this Techstrong Learning Experience, we will discuss how platform engineers can more efficiently build an IDP with Amazon EKS and Weave GitOps and accelerate cloud-native adoption while speeding up migration of existing applications to the cloud.
Our experts will also introduce EKS Blueprints, a collection of infrastructure-as-code (IaC) modules like Terraform and AWS Cloud Development Kit (AWS CDK) that will help you configure and deploy consistent EKS clusters across on-premises and cloud.
Key Takeaways:
- Why you should build a self-service IDP
- How to leverage EKS, GitOps and EKS Blueprints to build your IDP
- A review of use cases and benefits of an IDP
Kubo (Cloud Foundry Container Platform): Your Gateway Drug to Cloud-native (cornelia davis)
You’re at the Cloud Foundry Summit, which means you are by definition a cloud-native enthusiast. There’s no question that building apps in this architectural style will produce resilient, scalable software in an agile manner, and allow you to operate it far more efficiently than you’ve been able to in the past. But you’ve also got a whole lot of software in your company’s portfolio that isn’t there yet. Do you have to resign yourself to the pains of managing those applications the old way until you can finally refactor them to be cloud-native? Kubo to the rescue.
You can run legacy applications on Kubo without significant refactoring – pure and simple. As an added bonus, it allows you to satisfy the CIO mandate of running containers (check). But it’s far more than that: running those workloads on Kubo offers advantages over running them on traditional virtualized infrastructure. This session covers those advantages: resource consolidation, health management, multi-cloud support, and more. It will also present the abstractions in Kubernetes, things like pods and stateful sets, that support running legacy workloads in cloud environments that are far more distributed and changeable than in the past. It’s a first step to cloud-native.
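To make those abstractions concrete, here is a minimal sketch of the kind of StatefulSet a stateful legacy workload would use, built as a plain Python dict so it can be serialized to YAML or submitted through the Kubernetes API. The application name, image, and storage size are hypothetical placeholders.

```python
import json

def legacy_statefulset(name: str, image: str, replicas: int = 1) -> dict:
    """Build a minimal Kubernetes StatefulSet manifest for a legacy app.

    StatefulSets give each pod a stable network identity and its own
    persistent volume -- the properties stateful legacy workloads
    typically need. All names here are illustrative.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": name},
        "spec": {
            "serviceName": name,
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
            # Each replica gets its own PersistentVolumeClaim from this template.
            "volumeClaimTemplates": [{
                "metadata": {"name": "data"},
                "spec": {
                    "accessModes": ["ReadWriteOnce"],
                    "resources": {"requests": {"storage": "10Gi"}},
                },
            }],
        },
    }

manifest = legacy_statefulset("legacy-app", "registry.example.com/legacy-app:1.0")
print(json.dumps(manifest, indent=2))
```

Serialized to YAML, this is the manifest you would hand to `kubectl apply`; the per-replica volume claims are what distinguish it from a stateless Deployment.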
About the Talk:
The cloud-native ecosystem is bringing a huge change to the way DevOps works in cloud-native organisations. Developers and operators in cloud-native organisations are using tools and platforms like Kubernetes to achieve the agility promised by DevOps and microservices. The tools and best practices for stateless applications are well established, and the results can be seen in the agility of teams running stateless applications. Stateful applications, however, pose new challenges for DevOps teams in achieving that agility, as best practices around persistent storage management are still emerging. In this talk, we first discuss the challenges DevOps teams face when handling persistent storage for stateful applications, and then cover the open-source tools and best practices that help DevOps teams achieve data agility for cloud-native applications.
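As one established pattern for the persistent-storage challenges discussed above, stateful workloads on Kubernetes typically request storage through a PersistentVolumeClaim; a minimal sketch, with an illustrative claim name and size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # illustrative claim name
spec:
  accessModes:
    - ReadWriteOnce         # volume mounted read-write by a single node at a time
  resources:
    requests:
      storage: 10Gi         # illustrative capacity request
```

The claim decouples the application from the underlying storage: the workload mounts the claim, while the cluster's storage provisioner decides how the volume is actually backed.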
As more and more enterprises look at leveraging the capabilities of public clouds, they face an array of important decisions. For example, they must decide which cloud(s) and which technologies to use, how to operate and manage resources, and how to deploy applications.
During the “Architecting for the Cloud” breakfast seminar, we discussed the requirements of modern cloud-based applications and how to overcome the constraints of traditional on-premises infrastructure.
We heard from data management practitioners and cloud strategists from Amazon Web Services and NuoDB about how organizations are meeting the challenges associated with building new or migrating existing applications to the cloud.
Finally, we discussed how the right cloud-based architecture can:
- Handle rapid user growth by adding new servers on demand
- Provide high performance even in the face of heavy application usage
- Offer around-the-clock resiliency and uptime
- Provide easy and fast access across multiple geographies
- Deliver cloud-enabled apps in public, private, or hybrid cloud environments
Containerization is a lightweight alternative to full machine virtualization that involves encapsulating an application in a container with its own operating environment. This provides many of the benefits of loading an application onto a virtual machine, as the application can be run on any suitable physical machine without any worries about dependencies.
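As a concrete illustration of the encapsulation described above, a minimal, hypothetical Dockerfile that packages a Python application together with its dependencies so it runs the same on any suitable host:

```dockerfile
FROM python:3.12-slim            # base image providing the operating environment
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]         # hypothetical application entry point
```

Because the image carries its own runtime and dependencies, the host only needs a container engine; no per-machine dependency setup is required.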
Workshop: Deploy a Deep Learning Framework on Amazon ECS and Spot Instances – Amazon Web Services (AWS re:Invent 2016)
by Asif Khan, Technical Business Development Manager, AWS
Deep learning is an implementation of machine learning that uses neural networks to solve difficult and complex problems, such as computer vision, natural language processing, and recommendations. Due to the availability of deep learning libraries and frameworks, developers have the ability to enhance the capabilities of their applications and projects. In this workshop, you learn how to build and deploy a powerful deep learning framework called MXNet on containers. The portability and resource management benefits of containers mean developers can focus less on infrastructure and more on building.
The labs start by demonstrating the automation capabilities of AWS CloudFormation to stand up core infrastructure; as an added bonus, you use Spot Fleet to leverage the cost benefits of using Spot Instances, especially for developer environments. Then, you walk through creating an MXNet container in Docker and deploying it with Amazon ECS. Finally, you walk through an image classification demo of MXNet to validate that everything is working as expected.
Note: This workshop focuses on containerizing MXNet. The features of MXNet and capabilities of deep learning in general are vast, and there are recorded sessions from re:Invent that dive deeper on these topics. All you need to participate is a laptop and an AWS account. Pizza will be provided. Level 300
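The containerization step the workshop describes might look roughly like this hedged Dockerfile sketch; the demo script name is hypothetical, and `mxnet` is the framework's pip package:

```dockerfile
FROM python:3.9-slim
RUN pip install --no-cache-dir mxnet     # install the MXNet deep learning framework
WORKDIR /app
COPY classify.py .                       # hypothetical image-classification demo script
CMD ["python", "classify.py"]
```

Once built, an image like this can be pushed to a registry and referenced from an Amazon ECS task definition for deployment.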
Similar to Give DevOps teams self-service resource pools within your private infrastructure with Dell Technologies APEX cloud and storage solutions
Investing in GenAI: Cost‑benefit analysis of Dell on‑premises deployments vs.... – Principled Technologies
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration for how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel... – Principled Technologies
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
Enable security features with no impact to OLTP performance with Dell PowerEd... – Principled Technologies
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security—without paying a large performance cost—consider enabling AMD SME and AMD SEV-ES in your Dell PowerEdge servers.
Improving energy efficiency in the data center: Endure higher temperatures wi... – Principled Technologies
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS‑621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621C-TN12R in three scenario types simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenario types and component failures in the latter two tests, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server’s motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe... – Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Conclusion
The appeal of incorporating GenAI into your organization’s operations is likely great. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then went on to modify the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with numerous Dell documents and some flexibility, you could be well on your way to innovating your next GenAI breakthrough.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... – Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation (VCF)
For organizations running clusters of moderately configured, older Dell PowerEdge servers with a previous version of VCF, upgrading to better-configured modern servers can provide a significant performance boost and more.
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back... – Principled Technologies
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be—you could see better performance per watt with these AMD EPYC processor-based server clusters and potentially get more from your Redis or other data intensive applications and workloads while reducing data center power costs.
Improve performance and gain room to grow by easily migrating to a modern Ope... – Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
Boost PC performance: How more available memory can improve productivity – Principled Technologies
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg... – Principled Technologies
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... – Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9ms. For reference, the PowerEdge R760 cluster clocked in at 3.8ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operation adapter provided useful infrastructure insights.
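The “percent more” figures above follow the standard relative-improvement calculation; a minimal sketch with illustrative numbers (not the measured results from this report):

```python
def percent_more(new_value, baseline):
    """Return how much larger new_value is than baseline, as a percentage."""
    return (new_value - baseline) / baseline * 100.0

# Illustrative only: a baseline of 100,000 TPM versus 178,000 TPM
# works out to 78 percent more TPM.
print(round(percent_more(178_000, 100_000)))  # prints 78
```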
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man... – Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management, and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than the Supermicro tools did. In addition, the Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus doing the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ... – Principled Technologies
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a full NVMe backed configuration, but Vendor A doesn’t—its solution uses EBS for storage capacity and NVMe as an extended read cache—which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS – Principled Technologies
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes—up to 512 storage nodes with capacity of up to 8 PBs—enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
Get in and stay in the productivity zone with the HP Z2 G9 Tower Workstation – Principled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to Dell Precision 3660 and 5860 tower workstations in optimized performance modes
Conclusion
HP Z2 G9 Tower Workstation users can change the BIOS settings to dial in the performance mode that best suits their needs: High Performance Mode, Performance Mode, or Quiet Mode. In good news for both creative and technical professionals, we found that an Intel Core i9-13900 processor-powered HP Z2 G9 Tower Workstation set to High Performance Mode received higher CPU-based benchmark scores than both a similarly configured Dell Precision 3660 and a Dell Precision 5860 equipped with an Intel Xeon w5-2455x processor. Plus, the HP Z2 G9 Tower Workstation was quieter while running CPU-intensive Cinebench 2024 and SPECapc for SolidWorks 2022 workloads than both Dell Precision tower workstations. This means HP Z2 G9 Tower Workstation users who prize performance over everything else can do so without sacrificing a quiet workspace.
Open up new possibilities with higher transactional database performance from... – Principled Technologies
In our PostgreSQL tests, R7i instances boosted performance over R6i instances with previous-gen processors
If you use the open-source PostgreSQL database to run your critical business operations, you have many cloud options from which to choose. While many of these instances can do the job, some can deliver stronger performance, which can mean getting a greater return on your cloud investment.
We conducted hands-on testing with the HammerDB TPROC-C benchmark to see how the PostgreSQL performance of Amazon EC2 R7i instances, enabled by 4th Gen Intel Xeon Scalable processors, stacked up to that of R6i instances with previous-generation processors. We learned that small, medium-sized, and large R7i instances with the newer processors delivered better OLTP performance, with improvements as high as 13.8 percent. By choosing the R7i instances, your organization has the potential to support more users, deliver a better experience to those users, and even lower your cloud operating expenditures by requiring fewer instances to get the job done.
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply AI to our own infrastructure from an enterprise perspective. I will give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Key Trends Shaping the Future of Infrastructure.pdf – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
UiPath Test Automation using UiPath Test Suite series, part 3 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Give DevOps teams self-service resource pools within your private infrastructure with Dell Technologies APEX cloud and storage solutions
1. Give DevOps teams self-service resource pools within your private infrastructure with Dell Technologies APEX cloud and storage solutions
We confirmed several real-world use cases in a VMware vSphere with Tanzu environment
The fast-paced needs of modern application development and delivery have pushed some companies to subscribe to public cloud-based solutions such as Amazon Web Services (AWS). It can be frustrating for a DevOps team to wait for a separate IT team to approve every individual resource request. Beyond frustration, this fragmented approach can waste time and money as the teams deliberate over resource usage instead of innovating in their own areas. What some companies don’t realize, however, is that ITOps teams can use VMware® vSphere® technology to deliver public-cloud-like self-service to private cloud solutions such as those in the Dell Technologies APEX portfolio.
At Principled Technologies, we validated key use cases and functionality for a solution comprising Dell Technologies APEX Private Cloud and APEX Data Storage Services running a VMware vSphere with Tanzu environment. The solution allowed us to set up namespace self-service that could enable DevOps teams to create resources within an ITOps-defined quota; set up VM and Kubernetes cluster self-service that dev teams can use to create and destroy clusters and VMs at will; and configure storage policies within VMware vSphere that dev teams could then use to assign storage with specific quality of service (QoS) to clusters and VMs. We then verified the functionality of each of these features by using them to create a simple containerized application and VM appliance and ensuring that the two could communicate with each other. Though each feature we tested would directly empower a theoretical DevOps team, setting up these capabilities would allow ITOps teams to establish limits up front without the need to approve individual DevOps requests.
• Enable DevOps to provision and manage their own resources: namespace, VM, and cluster self-service enables DevOps to build and tear down Kubernetes® clusters and VMs at will
• Create custom storage policies that leverage a variety of classes and tiers of storage
• Avoid cloud cost spirals with on-premises managed hardware
Give DevOps teams self-service resource pools within your private infrastructure with Dell Technologies APEX cloud and storage solutions
January 2022
A Principled Technologies report: Hands-on testing. Real-world results.
2. How we tested
Table 1 shows basic details of the environment we used for testing. For full details on our test environment, including hardware components, see the science behind this report.
Table 1: Our testing environment
APEX Private Cloud: Infrastructure-as-a-service (IaaS) environment built upon Dell EMC VxRail hyperconverged infrastructure with VMware vSphere and vSAN™.
VMware Tanzu: Containers-as-a-service functionality with Kubernetes orchestration built into the vSphere hypervisor. Together with APEX Private Cloud, this provides an on-premises, as-a-service VMware private cloud environment.
VMware vSphere with Tanzu: Software-defined data center technology that serves as the basis for the environment.
APEX Data Storage Services: Optional Block Services storage attached to VMware vSphere with Tanzu.
Fictional scenario
A fictional company is modernizing its approach to applications. Their app DevOps team wants to implement a containerized mobile app that runs as a set of microservices in the containerized environment. Those containers will also need to access a legacy data source running in a VM appliance. It’s a complex task, but the DevOps team is more than capable of rising to the challenge. There’s only one problem: their toolset.
To hit their aggressive schedules, the DevOps team wants more self-service features for provisioning resources during development and testing.
Sean, the ITOps team lead, has some concerns with the DevOps request and thinks they can accomplish everything the DevOps team wants with the tools they already have. After conferring with his team, Sean plans to build a test solution to demonstrate that VMware vSphere with Tanzu together with APEX Private Cloud and APEX Data Storage Services can deliver a DevOps-empowering solution right from the company’s existing on-premises cloud.
Aylea, the mobile app DevOps team lead, has requested permission to implement a development sandbox using AWS so her team can easily experiment, stand up and tear down Kubernetes clusters, and configure tools to support the new containerized mobile app without the resource request process that can interrupt the DevOps team’s flow and creativity.
3. Management use case sections
Figure 1 shows the main use cases we used to deliver and verify DevOps self-service in a VMware vSphere with Tanzu environment on APEX Private Cloud and APEX Data Storage Services solutions. Ultimately, we were able to set up and confirm each use case without issue. The following sections cover these use cases in more detail and demonstrate how they could each help a hypothetical company modernize its mobile applications.
Management use cases
Configuring storage policies within VMware vSphere with Tanzu
Setting up namespace self-service
Setting up VM and Kubernetes cluster self-service
Confirming self-service use:
✓ Creating and deleting Kubernetes clusters within a namespace
✓ Creating and deleting VMs within a namespace
✓ Provisioning Kubernetes clusters and VMs with higher-QoS storage for production
✓ Configuring an application environment
Figure 1: Summary of the management use cases we investigated.
Aylea’s DevOps team thinks they need a public cloud solution to accomplish their goals, but as our report will show, the fictional company already has everything it needs at its disposal. Currently, they maintain a highly virtualized VMware vSphere environment and use APEX Private Cloud for IaaS and APEX Data Storage Services to scale their storage with configurable service levels. Can their ITOps team use the existing solution to deliver a self-service app development environment to the DevOps team?
4. Configuring storage policies
We set up storage policies within the VMware vSphere with Tanzu environment that enabled us to leverage different storage classes and assign low, medium, and high storage tiers to different resources within the environment.
Aylea’s DevOps team wants to be able to set guarantees on storage performance and fault-tolerance minimums for their test environment without first needing to go through the ITOps team, which would involve an approval process that may slow down development. Without the ability to guarantee certain levels of performance, the DevOps team’s test environment could have unreliable performance that makes it difficult to optimize their mobile app.
Though Aylea thinks cloud storage may be the most viable solution, Sean knows that he can enable the DevOps team to assign storage tiers to their test resources from within their current VMware vSphere with Tanzu environment.
Namespace self-service would allow Aylea’s DevOps team to set the pace of their own resource usage, without needing to formally request resources from ITOps—that’s one big reason why they originally wanted to use a public cloud solution.
With a public cloud solution, Sean’s ITOps team would be concerned about the ensuing cost spirals should DevOps provision more resources than their budget allows. With a private cloud solution, ITOps would still be concerned that DevOps’ resource usage for the test environment could eat into their production resources. This could cause QoS degradation on the company’s production apps and negatively affect the end-user experience for client-facing work.
VMware vSphere allows Sean to set up a self-service namespace with overall resource limits and let Aylea’s DevOps team choose how they want to spend it.
Setting up namespace self-service
We set up a VMware namespace service that could enable a hypothetical app development team to provision their own VMware vSAN storage resources within a test environment. We then set up a high-performance, high-QoS production namespace that a hypothetical company could use in its production environment. Finally, we verified that we could use namespaces to establish resource quotas for VMs, memory, and disk usage.
IT administrators can enable the VMware namespace service with a total resource quota for all resources within that namespace. Once a development team creates their namespace, a vCenter operator will need to provision the namespaces with CPU, memory, storage quotas, and storage tiers (if necessary). After establishing a quota, the dev team is free to provision clusters and VMs within the established resource limits.
5. On the day of the demonstration, Sean walks Aylea and the rest of the DevOps team through the solution he’s designed. Though skeptical at first, Aylea is able to confirm that all of the functionality the DevOps team wanted is actually already possible with the APEX solutions the company already uses. With the ITOps demonstration environment, Aylea’s team can use their designated namespace to stand up and wipe away Kubernetes clusters at will; they can use VM self-service to stand up and tear down VMs within that namespace; and they can use storage policies to provision clusters with high-QoS tier storage suitable for a production environment.
As a final test, Aylea configures a containerized app and VM appliance, and enables the two to communicate with each other. On a very basic level, this is how Aylea envisions the app will ultimately behave. Because ITOps demonstrated that the desired functionality exists within the APEX solutions the company currently runs, Aylea no longer believes they need a public cloud dev environment to get the job done. Go team!
VM and cluster self-service would enable Aylea’s DevOps team to create clusters and VMs for app infrastructure without needing to first request resources from the ITOps team. It’s important for the DevOps team to have access to VMs so they can develop the infrastructure required to link their new, modernized app to the company’s legacy monolithic data service. But Sean’s team is again concerned that without proper approval from ITOps, the DevOps team’s resource usage might exceed their limits and begin to degrade performance for their services already in production.
Because the company’s VMware vSphere with Tanzu environment allows for self-service, Aylea’s DevOps team will be able to quickly provision required resources on an ad hoc basis while remaining within the resource limits that Sean’s team sets.
Setting up VM and cluster self-service
Modern development workflows may require devs to use both containers and VMs. We set up VM and cluster self-service that could enable a hypothetical development team to provision and deprovision both Kubernetes clusters and VMs using the self-service namespaces we configured earlier. This self-service approach could save both time and money for dev and IT teams, as each could focus on their own needs rather than processing infrastructure requests.
Confirming self-service use
After configuring storage policies for APEX Data Storage Services and setting up namespace and VM self-service, we confirmed that a hypothetical DevOps team would be able to use these features to implement an app using self-service. We created and destroyed Kubernetes clusters and VMs within the dev namespace we set up, and we were able to assign storage with high quality of service to each of those resources. Additionally, we configured a containerized application and a VM appliance and enabled each to communicate with the other.
6. Conclusion
Giving DevOps teams the power to manage resources at will can provide flexibility and speed to the application modernization process, save valuable time (and therefore money) by avoiding the resource approval process, and give IT administrators peace of mind by enabling them to set overall resource limits that the DevOps team won’t be able to exceed. By deploying a modern applications environment in an existing private cloud, organizations can avoid having to manage multiple environments and can use a consistent governance framework to control resource usage and policy compliance. This can optimize resource utilization through a unified pool of capacity/performance and service levels, thereby reducing management complexity and the need for multiple sets of specialized skills.
At Principled Technologies, we successfully used a VMware vSphere with Tanzu environment running on Dell Technologies APEX Private Cloud and APEX Data Storage Services solutions to perform the following real-world tasks:
• Set up self-service namespaces that can enable DevOps teams to create resources within established quotas
• Set up self-service for VMs and Kubernetes clusters, which can enable dev teams to create and destroy said resources at will
• Set up storage policies that can allow dev teams to assign storage with specific QoS to VMs and clusters
With these abilities, your DevOps team can enjoy self-service in their private IT infrastructure while ITOps maintains overall control of resources. To learn more about VMware vSphere with Tanzu on Dell Technologies solutions, visit www.delltechnologies.com/tanzu.
7. The science behind the report
The following sections describe what we tested, how we tested, and what we found.
We concluded our hands-on testing on November 9, 2021. The results in this report reflect configurations that we finalized on October 22, 2021 or earlier. Unavoidably, these configurations may not represent the latest versions available when this report appears.
System configuration information
Table 2: Detailed information on the system we tested.
System configuration information: 4 x Dell EMC™ VxRail E560F
Version: VxRail 7.0.240-27141857
BIOS version: 2.11.2
Non-default BIOS settings: APEX default
Operating system name and version/build number: VMware ESXi™ 7.0.2, 18426014
Date of last OS updates/patches applied: APEX default
Processor
Number of processors: 2
Vendor and model: Intel® Xeon® Gold 6212U
Core count (per processor): 24
Core frequency (GHz): 2.40
Stepping: B1 (SRF9A)
Memory module(s)
Total memory in system (GB): 256
Number of memory modules: 4
Vendor and model: Samsung® M393A8G40MB2-CVF
Size (GB): 64
Type: DDR4
Speed (MHz): 2,933
Speed running in the server (MHz): 2,933
Storage controller
Vendor and model: Dell HBA330
Cache size (GB): 0
Distributed storage
Storage type: vSAN
Number of nodes: 4
Number of disk groups: 4
Number of disks per disk group: 4
8. System configuration information: 4 x Dell EMC™ VxRail E560F
Network adapter
Vendor and model: Mellanox® ConnectX®-4 MT27710 Family
Number and type of ports: 2 x 25GbE
Firmware version: 14.28.45.12
Cooling fans
Vendor and model: APEX default
Number of cooling fans: APEX default
Power supplies
Vendor and model: APEX default
Number of power supplies: 2
Wattage of each (W): APEX default
9. How we tested
Setting up Storage Policy Based Management (SPBM)
Reviewing the native VMware vSAN™ policy
1. Inside the workload vCenter host, navigate to Storage Policy Based Management by following this path: Menu > Policies and Profiles.
2. To see the policy rules, select vSAN Default Storage Policy from the list.
Creating new tiered storage policies
1. Select Create.
2. Select the appropriate vCenter account to create the policy.
3. Give the policy a name. We used ADSS-high.
4. Select Enable rules for DELLEMC.ADSS.VVOL storage.
5. Click Next.
6. From the dropdown menu, select High.
7. Review compatible storage.
8. Click Next.
9. Review the settings, and click Finish.
10. Review the VM Storage Policy. In the center-right area of the screen, the Storage Compatibility tab shows you that the storage policy is indeed targeting the external storage you chose.
11. To create the Medium and Low quality of service (QoS) value options, complete steps 1 through 10 twice more.
Setting up self-service namespaces
In the current iteration of the VMware namespace service, there is only the option for a single overall namespace service template configuration. That is, you enable the namespace service with a set resource quota that serves as the overarching quota. After developers create individual namespaces, the vCenter operator must be notified and will need to go into those namespaces and provision them accordingly with CPU, memory, and storage quotas as well as storage tiers, if necessary. Currently, there is no namespace service template option that can provision individual custom namespaces created in this manner ahead of time; however, once the quota is established, the developer is free to work within those limits, setting up clusters or VMs. Any new namespaces created at the CLI will need to go through the operator to provision quotas.
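The developer's half of that workflow can be sketched end to end. This is a dry-run sketch (the commands are echoed rather than executed) using the example server address, user, and namespace name that appear later in this report; substitute your own values before running anything for real.

```shell
#!/bin/sh
# Dry-run sketch of the developer-side namespace self-service flow.
# Values below are the examples used in this report, not defaults.
SERVER=100.80.28.241          # vCenter CLI endpoint (example from this report)
USER=demouser@vsphere.local   # developer identity
NS=demo1                      # namespace the developer will create

run() { echo "+ $*"; }        # swap 'echo' for real execution when ready

# 1. Developer logs in to the Supervisor control plane.
run kubectl vsphere login -u "$USER" --server="$SERVER" --insecure-skip-tls-verify
# 2. Developer creates a namespace under the service's overall quota.
run kubectl create namespace "$NS"
# 3. A vCenter operator then provisions CPU/memory/storage quotas on the
#    new namespace (done in vCenter, not at this CLI).
# 4. Developer works within the established limits.
run kubectl config use-context "$NS"
```

Because the commands are only echoed, the flow can be reviewed (or pasted into a runbook) before anyone touches the real environment.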
Enabling the namespace service
1. Log into vCenter, and navigate to Menu > Hosts and Clusters > Configure > Namespaces > General.
2. Expand the Namespace Service.
3. Toggle the Namespace Service Status from Inactive to Active.
4. Enter values for the configuration template. We used the following:
• CPU - 128 GHz
• Memory - 128 GB
• Storage - 128 GB
• Storage Policy - vSAN default
5. Review the values, and click Next.
6. Select an Identity Source.
7. For the identity source, select either vsphere.local or an AD/LDAP user if one is connected.
8. Review the namespace template, and click Finish.
9. Review the details of the namespace service. The namespace self-service status should show as activated.
Installing kubectl
1. Install kubectl:
snap install kubectl --classic
10. Getting the link for the Kubernetes CLI Tools download page
1. Using the vSphere client, log into the vCenter Server.
2. Navigate to vSphere Cluster > Namespaces, and select the vSphere Namespace where you are working.
3. Select the Summary tab, and locate the Status area on this page.
4. Beneath the heading for Link to CLI Tools, to open the download page, select Open. (Or, you can copy the link.)
Installing Kubernetes CLI Tools
1. Using a browser, navigate to the Kubernetes CLI Tools download URL for your environment.
2. Select the operating system.
3. Download the vsphere-plugin.zip file.
4. Extract the contents of the ZIP file to a working directory.
5. Add the location of both executables to your system’s PATH variable.
6. Verify the installation of kubectl:
kubectl
7. Verify the installation of the vSphere Plugin for kubectl:
kubectl vsphere
Creating a namespace from CLI
1. Navigate to Menu > Hosts and Clusters > Configure > Namespaces > General.
2. Toward the bottom of the list is a field for Link to CLI tools. To open this link in a new tab, choose Open. This is the IP address of the server you will be logging into for future CLI commands.
3. At the CLI, use that IP address in the --server field when logging in:
kubectl vsphere login -u demouser@vsphere.local --server=100.80.28.241 --insecure-skip-tls-verify
4. If the context you wish to use is not in the list, you may need to try logging in again later, or contact your cluster administrator.
5. To continue, switch your context:
kubectl config use-context 100.80.28.241
Creating the namespace
1. Create the namespace:
kubectl create namespace demo1
2. To see the new namespace you have created, log out and log back into the vSphere kubectl CLI:
kubectl vsphere logout
kubectl vsphere login -u demouser@vsphere.local --server=100.80.28.241 --insecure-skip-tls-verify
3. You can now use the new context:
kubectl config use-context demo1
4. To verify that the demo1 context is mapped to the demo1 namespace, list the contexts:
kubectl config get-contexts
Verifying the namespace in vCenter
1. Navigate to Menu > Hosts and Clusters > Namespaces. You will now see the namespace you created at the CLI.
11. Enabling and provisioning the VM and Kubernetes cluster services in VMware Tanzu
In a modern DevOps workflow, a developer will need access to both container orchestration tools (such as Kubernetes) and traditional standalone VMs. Whether the purpose is to utilize a VM-based appliance-style application/database or to spearhead the migration to microservices, having powerful development tools delivered to your team as a service will undoubtedly save time and money, freeing them to focus on development rather than infrastructure requests.
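On the Kubernetes side, "building and tearing down clusters at will" ultimately comes down to applying and deleting a manifest in the self-service namespace. The sketch below writes a minimal TanzuKubernetesCluster definition; the API group and field names follow the run.tanzu.vmware.com/v1alpha1 schema, while the class names, storage classes, and Kubernetes version are assumptions modeled on the "best effort" VM classes and ADSS tiers used elsewhere in this report.

```shell
#!/bin/sh
# Sketch: generate a TanzuKubernetesCluster manifest for the demo1
# namespace. Class/storageClass/version values are assumptions; check
# what your environment actually offers before applying.
cat > tkc-demo.yaml <<'EOF'
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster
  namespace: demo1
spec:
  topology:
    controlPlane:
      count: 1
      class: best-effort-small      # assumed VM class
      storageClass: adss-high       # assumed ADSS tier
    workers:
      count: 3
      class: best-effort-small
      storageClass: adss-medium
  distribution:
    version: v1.20                  # available versions vary by environment
EOF
# The build/tear-down workflow is then simply:
#   kubectl apply -f tkc-demo.yaml
#   kubectl delete -f tkc-demo.yaml
echo "wrote tkc-demo.yaml"
```

Deleting the manifest reclaims the cluster's quota inside the namespace, which is exactly the "stand up and wipe away" behavior the fictional DevOps team asked for.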
Creating the Content Library for VM images
To make VMs available for your team members to spin up, we first need to get the VM images and insert them as items into a content library for the VM service to reference.
1. In the top left, click the vSphere client to reach the shortcuts dashboard.
2. Click Content Libraries. If you have Tanzu enabled, you should see the TKG library.
3. To create a new library, click Create.
4. Name the new library VM service.
5. Click Next.
6. In the Configure content library menu, select Local content library, and click Next.
7. In the Add storage section, select the default vSAN storage array, and click Next.
8. In the Ready to complete section, review your choices, and click Finish.
9. View the Content Libraries section, and verify that the library was indeed created.
Importing the CentOS VM item into the content library
You can download the VM image we used during testing from the following website:
https://marketplace.cloud.vmware.com/services/details/vm-service-image-for-centos1111?slug=true
1. In the Content Libraries menu, select the VM Service content library you created earlier.
2. From the Action dropdown menu, select Import Items.
3. Select Local file, and click Upload files.
4. On your local machine, select the OVA image file to upload.
5. Verify that the file uploaded successfully. In the lower left, you should now see the new imported item in the OVA templates window.
Adding the demouser and permissions
The demouser should already exist. We now need to grant that user permissions on our newly created Tanzu namespace.
1. From the dropdown menu, select Hosts and Clusters, and select the newly created demo1 namespace.
2. Select the permissions tab.
3. Click Add.
4. Select the identity source you will use. We used vsphere.local.
5. In the User field, type the username demouser.
6. From the Role field, choose Can edit.
7. Click OK.
8. Verify that the demo user was added to this namespace.
9. In the namespace summary, the Permissions tile should now show users and their roles.
Adding tiered storage policies to the namespace
These steps describe how to pull the storage policies that you created earlier into the namespace you just created, so that a hypothetical
dev team can choose which storage best suits their needs when building clusters and VMs.
1. In the namespace summary, on the Storage tile, click Edit Storage.
2. Select the three storage tiers you created: ADSS High, Medium, and Low.
3. Click OK.
4. The Storage tile should now show a small summary of storage policies.
Adding VM classes for VM creation
1. In the namespace summary, from the VM service tile, choose Manage VM classes.
2. Select the classes you wish to deploy. We chose the three basic "best effort" sizes.
3. Click OK.
4. The VM Service tile should now display the number of available VM classes.
Give DevOps teams self-service resource pools within your private infrastructure
with Dell Technologies APEX cloud and storage solutions
January 2022 | 11
Adding a content library to the VM service
The following steps provide the service with the VM image that we uploaded earlier.
1. From the VM Service tile, select Manage Content Libraries.
2. Select the VM service content library you created previously that contains the Centos VM image you retrieved from VMware.
3. Click OK.
4. The VM Service tile should now display the number of available content libraries.
Adding storage capacity limits for tiered storage
1. In the namespace summary, from the Capacity and Usage tile, select Edit Limits.
2. Expand the storage section. You should see the storage tiers you created previously.
3. Choose your desired limits. We chose the following:
• adss-high: 500 GB
• adss-medium: 200 GB
• adss-low: 100 GB
4. Click OK.
5. Return to the namespace summary and review section.
Creating the Tanzu Kubernetes cluster (tkc)
Bringing up the cluster
1. If you are not already logged in, log into the namespace as the demouser:
kubectl vsphere login -u demouser@vsphere.local --server=100.80.28.241 --insecure-skip-tls-verify
2. Set the context to the demo1 context that you created earlier.
kubectl config use-context demo1
3. Create a yaml file that has the following contents. Save this file as tkc1.yml:
tkc1.yml
apiVersion: run.tanzu.vmware.com/v1alpha1   #TKGS API endpoint
kind: TanzuKubernetesCluster                #required parameter
metadata:
  name: tkc1         #cluster name, user defined
  namespace: demo1   #vSphere namespace
spec:
  distribution:
    version: v1.20   #resolves to the latest v1.20 image
  settings:
    storage:
      # defaultClass: vsan-default-storage-policy
      defaultClass: ADSS-low
  topology:
    controlPlane:
      count: 1                    #number of control plane nodes
      class: best-effort-small    #VM class for control plane nodes
      storageClass: ADSS-low      #storage class for control plane nodes
    workers:
      count: 2                    #number of worker nodes
      class: best-effort-small    #VM class for worker nodes
      storageClass: ADSS-low      #storage class for worker nodes
4. Apply the tkc cluster object:
kubectl apply -f tkc1.yml
tanzukubernetescluster.run.tanzu.vmware.com/tkc1 created
5. Check the status of the tkc:
kubectl get tkc
NAME   CONTROL PLANE   WORKER   DISTRIBUTION   AGE   PHASE      TKR COMPATIBLE   UPDATES AVAILABLE
tkc1   1               2        v1.20          41s   creating   True
6. You can also retrieve more details from the 'describe' command output. The bottom of the output displays the most recent
status and events:
kubectl describe tkc tkc1
...
...
Node Status:
tkc1-control-plane-4bvtj: ready
tkc1-workers-qbffs-5bf887c966-cpbkm: pending
tkc1-workers-qbffs-5bf887c966-tdfg5: pending
Phase: creating
Vm Status:
tkc1-control-plane-4bvtj: ready
tkc1-workers-qbffs-5bf887c966-cpbkm: pending
tkc1-workers-qbffs-5bf887c966-tdfg5: pending
Events: <none>
7. If all is well, you should see everything up and running after a few minutes, depending on the number of nodes in your topology choice:
Node Status:
tkc1-control-plane-4bvtj: ready
tkc1-workers-qbffs-5bf887c966-cpbkm: ready
tkc1-workers-qbffs-5bf887c966-tdfg5: ready
Phase: running
Vm Status:
tkc1-control-plane-4bvtj: ready
tkc1-workers-qbffs-5bf887c966-cpbkm: ready
tkc1-workers-qbffs-5bf887c966-tdfg5: ready
8. Because the tkc you created lies at a lower level of administration and privileges, you will need to log out of the kubectl vsphere session
and log back in with additional arguments that target the namespace and cluster you created:
kubectl vsphere logout
kubectl vsphere login -u demouser@vsphere.local --server=100.80.28.241 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace demo1 --tanzu-kubernetes-cluster-name tkc1
You should now have access to the new tkc1 context.
9. Set the context to use tkc:
kubectl config use-context tkc1
10. Verify the context:
kubectl config get-contexts
11. Check the nodes:
kubectl get nodes
14. Deploying a web-server workload to your cluster
1. Add your Docker Hub credentials to the cluster. If you are pulling images from the public Docker Hub, you will need to create a Docker
account and a personal access token; Docker now limits anonymous pulls, so pulling multiple images frequently requires a personal or
company account:
https://docs.docker.com/docker-hub/access-tokens/
2. Once you have created your Docker Hub account and have your access token, create a secret in Kubernetes that allows your worker
nodes to authenticate and pull from Docker Hub. The --docker-password field should contain your access token:
kubectl create secret docker-registry regcred --docker-username=XXXXXX \
  --docker-password=XXXXXXXXXXXXXXXXXXXXX --docker-email=XXXXXXXXXX
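Behind the scenes, this command stores a base64-encoded .dockerconfigjson payload in the secret. As a rough sketch of that structure (the credentials below are hypothetical placeholders, and base64 -w 0 assumes GNU coreutils):

```shell
# Hypothetical placeholder credentials; substitute your own Docker Hub values.
DH_USER=myuser
DH_TOKEN=dckr_pat_example
DH_EMAIL=me@example.com
# The "auth" field is the base64 encoding of "username:token".
DH_AUTH=$(printf '%s:%s' "$DH_USER" "$DH_TOKEN" | base64 -w 0)
# Print the JSON document that the secret stores (base64-encoded) under
# the .dockerconfigjson key.
printf '{"auths":{"https://index.docker.io/v1/":{"username":"%s","password":"%s","email":"%s","auth":"%s"}}}\n' \
  "$DH_USER" "$DH_TOKEN" "$DH_EMAIL" "$DH_AUTH"
```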
Adding a pod security policy role to vSphere authenticated users
1. To deploy workloads, your "authenticated" users must be bound to the cluster role that has system-level privileges. The following
command creates a new role binding that assigns authenticated users the privileged pod security policy role psp:vmware-system-privileged.
Without this, your deployments will stall.
kubectl create clusterrolebinding default-tkg-admin-privileged-binding \
  --clusterrole=psp:vmware-system-privileged --group=system:authenticated
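If you prefer to manage this binding declaratively, a sketch of the equivalent manifest (apply it with kubectl apply -f) would look like the following:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-tkg-admin-privileged-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:vmware-system-privileged
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
```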
2. Create a webserver deployment file from the following code. Save this file as nginx-deployment.yml
nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
        env: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
3. Create a load balancer service file from the following code. Save this file as lb-nginx-svc.yml
lb-nginx-svc.yml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx
    env: frontend
  name: nginx-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
status:
  loadBalancer: {}
Apply the web server and load balancer objects:
kubectl apply -f nginx-deployment.yml
kubectl apply -f lb-nginx-svc.yml
4. Verify the webserver pod creation:
kubectl get pods
5. Verify the load balancer external IP:
kubectl get svc
6. Using the GUI, verify the webserver. (Nginx comes with a default homepage that you can reach via the external IP. Use the GUI to reach
that page now.)
7. Log out of the tkc:
kubectl vsphere logout
Creating VMs using the VM service
Preparing the VM cloud-init file
When spinning up a VM, devs often want commands or customized configuration executed at first boot so that the VM is prepared and
ready for their needs when they log in. Typically, a dev creates a cloud-init or user-data file in advance that sends this block of
configuration commands to the VM during spin-up. The following steps guide you through creating the cloud-init file, encoding its contents,
and inserting the encoded contents into a Tanzu VM object deployment file.
1. Log into the Tanzu Kubernetes demo1 namespace (note that we are logging into the namespace level and not the lower tkc level):
kubectl vsphere login -u demouser@vsphere.local --server=100.80.28.241 --insecure-skip-tls-verify
2. To continue, switch your context:
kubectl config use-context demo1
3. Create an ssh key:
ssh-keygen -t rsa
4. To skip past each of the prompts, press Enter.
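As an aside, the two steps above can be collapsed into a single non-interactive command (the output path here is illustrative):

```shell
# Remove any prior demo key so ssh-keygen does not prompt to overwrite.
rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
# -N '' sets an empty passphrase and -f sets the key file path, so
# ssh-keygen runs without prompting; -q suppresses the banner output.
ssh-keygen -t rsa -N '' -f /tmp/demo_id_rsa -q
# The resulting public key begins with the "ssh-rsa" key type marker.
head -c 7 /tmp/demo_id_rsa.pub
```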
5. Output the contents of the public key to a text file:
cat ~/.ssh/id_rsa.pub
The expected output is:
ssh-rsa AAAPI1yFTrmtg/rJIRVXLBFj9a1GvimhfgjqzhHU+1G+8d9+d/6ef3cqxTccQwMNs1f6Nb0Zf2R/2Z1A0=... you@your_host
6. Create the cloud-init file from the following contents. Save this file as cloud-init.yml:
cloud-init.yml
#cloud-config
## Required syntax at the start of a user-data file
## Create a user called centos, give it a password of VMware1!, and set the password to not expire
chpasswd:
  list: |
    centos:VMware1!
  expire: false
## Create a docker user group on the OS
groups:
  - docker
users:
  ## Create the default user for the OS
  - default
  ## Customise the centos user created above by adding an SSH key that is allowed to
  ## log into the VM; in this case, it is the SSH public key of my laptop
  - name: centos
    ssh-authorized-keys:
      - YOUR-PUBLIC-SSH-KEY-GOES-HERE
    ## Add the centos user to the sudo group and allow it to escalate to sudo without a password
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, docker
    ## Set the default shell of the user to bash
    shell: /bin/bash
## Enable DHCP on the default network interface provisioned in the VM
network:
  version: 2
  ethernets:
    ens192:
      dhcp4: true
Paste your public key contents into the line shown, preserving the dash and the indentation. Save
and exit this file.
ssh-authorized-keys:
- YOUR-PUBLIC-SSH-KEY-GOES-HERE
7. Encode the cloud-init file into single-line base64 format:
cat cloud-init.yml | base64 -w 0
8. Copy the output of the previous command to a text file. You will use this code later.
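To sanity-check the encoding before pasting it into a deployment file, you can round-trip a small sample (the file name here is illustrative):

```shell
# Encode a tiny sample cloud-init file to a single base64 line; -w 0
# (GNU coreutils) disables line wrapping so the result fits in one YAML value.
printf '#cloud-config\nhostname: demo\n' > /tmp/sample-init.yml
ENCODED=$(base64 -w 0 < /tmp/sample-init.yml)
# Decoding the string reproduces the original file exactly.
echo "$ENCODED" | base64 -d
```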
Preparing and deploying VM object files
1. Get the network name that the system created while enabling Tanzu early in the setup for vCenter:
kubectl get network
2. On your local machine, create a VM deployment file named deploy-centos-vm.yml from the following code:
deploy-centos-vm.yml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: centos-vm-dev
  namespace: demo1
  labels:
    env: dev
spec:
  networkInterfaces:
  - networkName: "network-1"          ### use if you are NOT using NSX-T load balancing
    networkType: vsphere-distributed  ### use if you are NOT using NSX-T load balancing
  # - networkType: nsx-t              ### uncomment if you ARE using NSX-T load balancing
  className: best-effort-small
  imageName: centos-stream-8-vmservice-v1alpha1-1619529007339   ### available at time of writing
  powerState: poweredOn
  # storageClass: vsan-default-storage-policy   ### the default storage class
  storageClass: ADSS-low   ### the custom class we created: low QoS, 100GB capacity
  vmMetadata:
    configMapName: centos-vm-cm-dev
    transport: OvfEnv
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: centos-vm-cm-dev
  namespace: demo1
data:
  user-data: |
    YOUR-CLOUD-INIT-ENCODED-SINGLE-LINE-CONTENT-HERE-KEEP-INDENTATION
  hostname: centos-vm
3. Open the text file where you saved the encoded user-data string from earlier, and copy the string. In the deploy-centos-vm.yml file you
just created, replace the placeholder in the user-data field with the string you copied.
4. Save the deployment file, and exit.
5. Apply the VM deployment file:
kubectl apply -f deploy-centos-vm.yml
Please note that the VM service places VMs into the first-tier namespace, not inside a Tanzu Kubernetes cluster (tkc). In other words,
the VMs live one level above any tkc you create: in the namespace that created the tkc, never inside it.
6. Create a virtualmachineservice file for ssh access to the VM by establishing port 22 ingress and egress through a Tanzu virtual machine
load balancer service. This will allow any VMs with the 'env=dev' label to use this network service. First, create a file from the following
code, and name the file vm-ssh-svc.yml
vm-ssh-svc.yml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
  name: vm-ssh-svc
  namespace: demo1
spec:
  selector:
    env: dev
  type: LoadBalancer
  ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
7. Apply the virtual machine service for ssh access:
kubectl apply -f vm-ssh-svc.yml
Validating self-service VM creation capability
1. Verify that the VM exists at the CLI.
kubectl get vm
2. Verify that the VM exists within vSphere.
3. Verify that the configmap object you created successfully deployed:
kubectl get configmap
4. Find the load balancer external IP for ssh access:
kubectl get svc
5. Using ssh, log into the VM:
ssh centos@100.80.27.162
6. Verify that the system successfully executed the cloud-init items:
[centos@centos-vm-dev ~]$ cat /etc/group
....
docker:x:1000:centos
sudo:x:1001:centos
centos:x:1002:
cloud-user:x:1003:
7. Using yum, install a simple application to verify external internet connectivity:
sudo yum install telnet
Communicating between VM and Tanzu Kubernetes cluster
Accessing the tkc webserver workload from the VM
1. If you are not already logged into the namespace you created, log in now:
kubectl vsphere login -u demouser@vsphere.local --server=100.80.28.241 --insecure-skip-tls-verify
2. Set the context to the demo1 context that you created earlier:
kubectl config use-context demo1
3. Get the external IP of the VM:
kubectl get svc
4. Log into the VM:
ssh centos@100.80.27.162
5. To demonstrate communication between the VM and the tkc application, cURL the webserver pod homepage.
[centos@centos-vm-dev ~]$ curl 100.80.27.164
Accessing the VM from the containerized workload in tkc
After validating communication from the CentOS VM to the tkc cluster, we connected to the tkc cluster and validated that it was possible
to ping and log into the VM we previously created via SSH. In a production environment, this connectivity might be used to connect to a
database from a web application cluster hosted in tkc.
Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.
DISCLAIMER OF WARRANTIES; LIMITATION OF LIABILITY:
Principled Technologies, Inc. has made reasonable efforts to ensure the accuracy and validity of its testing, however, Principled Technologies, Inc. specifically disclaims
any warranty, expressed or implied, relating to the test results and analysis, their accuracy, completeness or quality, including any implied warranty of fitness for any
particular purpose. All persons or entities relying on the results of any testing do so at their own risk, and agree that Principled Technologies, Inc., its employees and its
subcontractors shall have no liability whatsoever from any claim of loss or damage on account of any alleged error or defect in any testing procedure or result.
In no event shall Principled Technologies, Inc. be liable for indirect, special, incidental, or consequential damages in connection with its testing, even if advised of
the possibility of such damages. In no event shall Principled Technologies, Inc.’s liability, including for direct damages, exceed the amounts paid in connection with
Principled Technologies, Inc.’s testing. Customer’s sole and exclusive remedies are as set forth herein.
This project was commissioned by Dell Technologies.
Cleaning up
1. In the upper-tier namespace where you created the VM (the demo1 context and namespace), to remove the VM objects, issue the
following commands:
kubectl get vm
kubectl delete vm XXXX
kubectl get configmap
kubectl delete configmap XXXX
kubectl get VirtualMachineService
kubectl delete VirtualMachineService XXXX
2. Remove the tkc objects:
kubectl delete tkc tkc1