Support your modern distributed microservices applications using VMware Tanzu Service Mesh on servers enabled by 3rd Generation Intel Xeon Scalable processors
If your organization uses a Kubernetes microservices architecture, a service mesh is a valuable tool for coordinating communication and security among services. In our testing, we deployed a microservices application distributed over two Kubernetes clusters, with secure inter-cluster communications for the services. We found that using VMware Tanzu Service Mesh (TSM) to carry out this task reduced the time required by 74 percent compared to using Istio alone. In performance testing of the TSM environment, the TCP bypass optimization reduced request duration by as much as 11.4 percent, and the Intel multi-buffer cryptography optimization for 3rd Generation Intel Xeon Scalable processors reduced request duration by up to 47.1 percent while nearly doubling performance.
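As a rough sanity check on how the latency reductions quoted above translate into a throughput gain (the "nearly doubling performance" claim), the arithmetic can be sketched as follows. The simple inverse relationship is an assumption that holds for a request-bound workload, not a statement about the test methodology:

```python
# Sketch: how a per-request duration reduction maps to a throughput
# multiplier, assuming throughput scales as the inverse of request
# duration (a simplification valid for a request-bound workload).

def throughput_multiplier(duration_reduction_pct: float) -> float:
    """Return the relative throughput after cutting request duration
    by the given percentage."""
    remaining = 1.0 - duration_reduction_pct / 100.0
    return 1.0 / remaining

# The multi-buffer cryptography optimization cut request duration by
# up to 47.1 percent, which corresponds to roughly 1.89x throughput,
# consistent with "nearly doubling performance."
print(round(throughput_multiplier(47.1), 2))  # → 1.89

# The TCP bypass optimization's 11.4 percent reduction corresponds
# to a more modest ~1.13x.
print(round(throughput_multiplier(11.4), 2))  # → 1.13
```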
Service meshes are relatively new, extremely powerful and can be complex. There’s a lot of information out there on what a service mesh is and what it can do, but it’s a lot to sort through. Sometimes, it’s helpful to have a guide. If you’ve been asking questions like “What is a service mesh?” “Why would I use one?” “What benefits can it provide?” or “How did people even come up with the idea for service mesh?” then The Complete Guide to Service Mesh is for you.
Scenarios in Which Kubernetes is Used for Container Orchestration of a Web Ap... (Sun Technologies)
Kubernetes is commonly used for container orchestration of web applications in various scenarios where scalability, reliability, and efficient management of containerized workloads are required. Here are some scenarios where Kubernetes is used for container orchestration of web applications:
Dynamic Chunks Distribution Scheme for Multiservice Load Balancing Using Fibo... (Editor IJCATR)
Cloud computing is a collection of distributed hosts that provides services to users on demand. The centralized cloud-based multimedia system (CMS) [4] emerged because huge numbers of users demand various multimedia services through the Internet at the same time, making it hard to design an effective load-balancing algorithm. Load balancing is the process of distributing workloads across aggregated computing resources to maximize throughput and minimize latency. In this paper, videos are split into a number of chunks and stored at hosts in a distributed manner; the chunk size is increased to reduce time lag and improve performance. Cluster heads monitor all distribution-host loads and client requests, so clients and hosts do not communicate directly. A Fibonacci-based breaking scheme is introduced to split a video file into chunks, which reduces the provisioning delay experienced by users and optimizes resource utilization by reducing idle time. The proposed scheme enables the end user to view the whole video without delay.
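The Fibonacci-based breaking scheme described above can be sketched in Python. The abstract does not give the exact splitting rule, so this is an assumed interpretation: successive chunk sizes follow the Fibonacci numbers scaled by a base unit, so early chunks are small (fast to provision for quick playback start) and later chunks grow larger (fewer requests per file). The function name and base-unit parameter are illustrative, not from the paper:

```python
# Sketch of a Fibonacci-based file-splitting scheme (assumed
# interpretation of the paper's "breaking scheme"): successive chunk
# sizes are base_unit * Fib(n), so playback can start quickly on
# small early chunks while later, larger chunks reduce per-request
# overhead.

def fibonacci_chunk_sizes(total_size: int, base_unit: int) -> list[int]:
    """Split total_size bytes into chunks of base_unit * Fib(n) bytes;
    the final chunk is truncated to whatever remains."""
    sizes = []
    a, b = 1, 1  # Fibonacci sequence: 1, 1, 2, 3, 5, 8, ...
    remaining = total_size
    while remaining > 0:
        chunk = min(a * base_unit, remaining)
        sizes.append(chunk)
        remaining -= chunk
        a, b = b, a + b
    return sizes

sizes = fibonacci_chunk_sizes(total_size=100, base_unit=5)
print(sizes)       # → [5, 5, 10, 15, 25, 40]
print(sum(sizes))  # → 100  (the chunks cover the file exactly)
```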
Microservices design principles establish some standard practices for planning, developing, and implementing a distributed architecture for your application. Read about some of the most common microservices design principles, with examples and implementations carried out by various companies worldwide.
[APIdays Paris 2019] API Management in Service Mesh Using Istio and WSO2 API ... (WSO2)
Stefano discusses how to augment service mesh functionality with API management capabilities, so you can create an end-to-end solution for your entire business functionality — from microservices, to APIs, to end-user applications.
A secure cloud transmission protocol (SCTP) was proposed in preceding work to achieve strong authentication and a secure channel in the cloud computing paradigm. SCTP was proposed with its own techniques for attaining cloud security: a multilevel authentication technique with a multidimensional password-generation system to achieve strong authentication, a multilevel cryptography technique to attain a secure channel, and a usage-profile-based intruder detection and prevention system to resist intruder attacks. SCTP was designed, developed, and analyzed using protocol engineering phases. The complete design of SCTP and its techniques is presented using a Petri net production model. We present the designed SCTP Petri net models and their analysis, and we discuss the SCTP design and its performance in achieving strong authentication, a secure channel, and intruder prevention. SCTP is designed for use in any cloud application: it can authorize and authenticate users, secure the channel, and prevent intruders during cloud transactions, and it is designed to protect against the different attacks mentioned in the literature. This paper presents the SCTP performance analysis report, which compares SCTP with existing techniques proposed to achieve authentication, authorization, security, and intruder prevention.
Profit Maximization for Service Providers using Hybrid Pricing in Cloud Compu... (Editor IJCATR)
Cloud computing has recently emerged as one of the buzzwords in the IT industry. Several IT vendors promise to offer computation, data/storage, and application hosting services, with Service-Level Agreement (SLA)-backed performance and uptime promises. While these "clouds" are the natural evolution of traditional clusters and data centers, they are distinguished by a pricing model in which customers are charged based on their utilization of computational resources, storage, and data transfer. They offer subscription-based access to infrastructure, platforms, and applications, popularly termed IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). To improve the profit of service providers, we implement a technique called hybrid pricing, a model that pools fixed and spot pricing techniques.
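The hybrid pricing idea above, pooling fixed and spot pricing, can be sketched as a simple revenue calculation. The paper does not give its exact formulation, so the split ratio and all price figures below are hypothetical, for illustration only:

```python
# Sketch of a hybrid pricing model (assumed formulation): a fraction
# of capacity is sold at a fixed subscription price, and the rest is
# sold at a fluctuating spot price. All numbers are hypothetical.

def hybrid_revenue(capacity_hours: float, fixed_fraction: float,
                   fixed_price: float, spot_prices: list[float]) -> float:
    """Provider revenue: fixed-price portion plus spot-price portion,
    with spot hours spread evenly over the observed spot prices."""
    fixed_hours = capacity_hours * fixed_fraction
    spot_hours = capacity_hours - fixed_hours
    avg_spot = sum(spot_prices) / len(spot_prices)
    return fixed_hours * fixed_price + spot_hours * avg_spot

# 1000 instance-hours: 60% sold at a fixed $0.10/hour, the rest at
# spot prices that averaged $0.07/hour over the billing period.
revenue = hybrid_revenue(1000, 0.60, 0.10, [0.05, 0.07, 0.09])
print(round(revenue, 2))  # → 88.0
```

The fixed portion gives the provider predictable revenue, while the spot portion monetizes otherwise idle capacity; the paper's claim is that pooling the two raises total profit over either model alone.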
Implementation of the Open Source Virtualization Technologies in Cloud Computing (neirew J)
"Virtualization and cloud computing" is a recent buzzword in the digital world. Behind this fancy poetic phrase lies a true picture of future computing, from both technical and social perspectives. Though virtualization and cloud computing are recent, the idea of centralizing computation and storage in distributed data centres maintained by third-party companies is not new; it dates back to the 1990s, alongside distributed computing approaches like grid computing, clustering, and network load balancing. Cloud computing provides IT as a service to users on demand, offering greater flexibility, availability, reliability, and scalability under a utility computing model. This new concept of computing has immense potential for use in e-governance and in the overall IT development of developing countries like Bangladesh.
The infrastructure cloud (IaaS) service model offers improved resource flexibility and availability, where tenants – insulated from the minutiae of hardware maintenance – rent computing resources to deploy and operate complex systems. Large-scale services running on IaaS platforms demonstrate the viability of this model; nevertheless, many organizations operating on sensitive data avoid migrating operations to IaaS platforms due to security concerns. In this paper, we describe a framework for data and operation security in IaaS, consisting of protocols for a trusted launch of virtual machines and domain-based storage protection. We continue with an extensive theoretical analysis with proofs about protocol resistance against attacks in the defined threat model. The protocols allow trust to be established by remotely attesting host platform configuration prior to launching guest virtual machines and ensure confidentiality of data in remote storage, with encryption keys maintained outside of the IaaS domain. Presented experimental results demonstrate the validity and efficiency of the proposed protocols. The framework prototype was implemented on a test bed operating a public electronic health record system, showing that the proposed protocols can be integrated into existing cloud environments.
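The trusted-launch idea in the framework above, remotely attesting the host platform configuration before launching a guest virtual machine, can be sketched at a high level. Real attestation relies on TPM-signed quotes and a chain of measurements; this toy version only compares a hash of the reported platform configuration against an expected value, and all names and configuration strings are illustrative:

```python
# Toy sketch of remote attestation before VM launch (illustrative
# only; real systems use TPM-signed quotes, not bare hashes): the
# tenant launches a guest VM only if the host's measured platform
# configuration hashes to a known-good value.
import hashlib

def measure(platform_config: str) -> str:
    """Hash of the host's reported configuration (a stand-in for a
    TPM measurement)."""
    return hashlib.sha256(platform_config.encode()).hexdigest()

def attest_and_launch(reported_config: str, expected_digest: str) -> bool:
    """Launch only when the measurement matches the expected digest."""
    return measure(reported_config) == expected_digest

trusted = "hypervisor=v2.1;secure_boot=on"   # known-good configuration
expected = measure(trusted)

print(attest_and_launch("hypervisor=v2.1;secure_boot=on", expected))   # → True
print(attest_and_launch("hypervisor=v2.1;secure_boot=off", expected))  # → False
```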
Key-exposure resistance has always been an important issue for in-depth cyber defence in many security applications. Recently, how to deal with the key exposure problem in the settings of cloud storage auditing has been proposed and studied. To address the challenge, existing solutions all require the client to update his secret keys in every time period, which may inevitably bring in new local burdens to the client, especially those with limited computation resources such as mobile phones. In this paper, we focus on how to make the key updates as transparent as possible for the client and propose a new paradigm called cloud storage auditing with verifiable outsourcing of key updates. In this paradigm, key updates can be safely outsourced to some authorized party, and thus the key-update burden on the client will be kept minimal. Specifically, we leverage the third party auditor (TPA) in many existing public auditing designs, let it play the role of authorized party in our case, and make it in charge of both the storage auditing and the secure key updates for key-exposure resistance. In our design, TPA only needs to hold an encrypted version of the client’s secret key, while doing all these burdensome tasks on behalf of the client. The client only needs to download the encrypted secret key from the TPA when uploading new files to cloud. Besides, our design also equips the client with capability to further verify the validity of the encrypted secret keys provided by TPA. All these salient features are carefully designed to make the whole auditing procedure with key exposure resistance as transparent as possible for the client. We formalize the definition and the security model of this paradigm. The security proof and the performance simulation show that our detailed design instantiations are secure and efficient.
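The client-side verification step described above, checking the validity of the encrypted secret key handed back by the TPA, can be illustrated with a toy sketch. The actual paradigm uses public-key techniques; here an HMAC over the encrypted blob, under a verification key that never leaves the client, stands in for the verifiability property. Every name and value below is illustrative:

```python
# Toy sketch of the client verifying an encrypted key blob returned
# by the third-party auditor (TPA). The real scheme uses public-key
# verification; an HMAC under a client-held verification key stands
# in for it here. Illustrative only.
import hashlib
import hmac

def tag(verify_key: bytes, encrypted_key_blob: bytes) -> bytes:
    """Integrity tag the client can check on each download."""
    return hmac.new(verify_key, encrypted_key_blob, hashlib.sha256).digest()

def client_accepts(verify_key: bytes, blob: bytes, blob_tag: bytes) -> bool:
    """Client downloads (blob, tag) from the TPA before uploading new
    files, and only uses the key material if the tag verifies."""
    return hmac.compare_digest(tag(verify_key, blob), blob_tag)

vk = b"client-verification-key"          # never leaves the client
blob = b"encrypted-secret-key-period-7"  # held and updated by the TPA
t = tag(vk, blob)

print(client_accepts(vk, blob, t))              # → True
print(client_accepts(vk, b"tampered-blob", t))  # → False
```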
The Importance of Testnets in Developing InitVerse dApps.pdf (InitVerse Blockchain)
InitVerse is a blockchain platform that aims to revolutionize governance models through experimental testnets. By exploring the efficacy of different governance models, InitVerse seeks to uncover valuable insights and lessons that can be applied to real-world scenarios. This article will delve into the importance of trialing InitVerse governance models on experimental testnets and highlight the lessons learned from these implementations.
Exploring the Efficacy of InitVerse Governance Models
Governance models play a crucial role in the success of blockchain platforms, as they determine how decisions are made and protocols are updated. However, finding the optimal governance model can be challenging due to the complex and decentralized nature of blockchain networks. InitVerse recognizes this challenge and is dedicated to trialing various governance models on experimental testnets.
Experimental testnets allow for the exploration and evaluation of different governance models in a controlled environment. By simulating real-world scenarios, InitVerse can assess the effectiveness of various models and identify their strengths and weaknesses. This approach ensures that any potential flaws or vulnerabilities are identified and addressed before implementing the governance models on the mainnet.
Through these trials, InitVerse can gather valuable data and user feedback that can inform the decision-making process. By involving the community in the experimentation, InitVerse fosters a collaborative environment where users can actively participate in shaping the platform’s governance models. This inclusive approach ensures that the governance models reflect the needs and preferences of the stakeholders, ultimately enhancing the platform’s long-term sustainability and resilience.
Lessons Learned from Implementing Experimental Testnets
The implementation of experimental testnets has yielded significant lessons that have shaped the development of InitVerse’s governance models. One crucial lesson is the importance of transparency and inclusivity in decision-making. By allowing users to participate in the governance process, InitVerse ensures that decisions are made collectively, promoting consensus and minimizing potential conflicts.
Another key lesson learned is the need for flexibility and adaptability. Blockchain networks are constantly evolving, and governance models must be able to accommodate changes and upgrades. Through the experimental testnets, InitVerse has identified the necessity of modular governance structures that can be easily modified and upgraded to meet the evolving needs of the platform and its users.
This presentation was delivered at the MQTC 2017 conference in Ohio. It covers different concepts and features of MQ you need to consider when moving your IBM MQ infrastructure into the cloud.
Comparison of Current Service Mesh Architectures (Mirantis)
Learn the differences between Envoy, Istio, Conduit, Linkerd and other service meshes and their components. Watch the recording including demo at: https://info.mirantis.com/service-mesh-webinar
In cloud computing, application and desktop delivery are two emerging technologies that have reduced application and desktop computing costs and provided greater IT and user flexibility compared to traditional application and desktop management models. Among the various SaaS technologies is XenApp, which allows numerous end users to connect to their corporate applications from any device. XenApp enables organizations to improve application management by centralizing applications in the datacenter to reduce costs, controlling and encrypting access to data and applications to improve security, and delivering applications instantly to users anywhere via remote access or streaming. In the old architecture, Oracle or MS-SQL ran as the XenApp backend on the Windows Server itself. This requires paying for a database license even though the database serves no other purpose, and because Windows is GUI-based, running a database on it consumes far more resources; it can also become a single point of failure. This paper proposes a scheme to remove this problem by using an HA failover-cluster-based SQL server (MySQL or Oracle) running on a Linux box, using the concept of a VIP (Virtual IP).
The main problem is to avoid the complexity of retrieving video content without streaming problems for multi-network clients. The proposed work improves collaboration among streaming contents on server resources in order to improve network performance, implementing network collaboration in a content delivery scenario with a strong reduction of data transferred via servers. Audio and video files are transmitted in blocks to clients through peers using the Network Coding Equivalent Content Distribution scheme. The objective of the system is to tolerate out-of-order arrival of blocks in the stream and to be resilient to transmission losses of an arbitrary number of intermediate blocks, without affecting the verifiability of the remaining blocks. We formulate the joint rate control and packet scheduling problem as an integer program whose objective is to minimize a cost function of the expected video distortion. Cost functions are proposed to provide service differentiation and address fairness among users.
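The verifiability property described above, where each block can be checked independently so that out-of-order arrival or loss of other blocks does not matter, can be sketched with a per-block digest table. The actual scheme uses network-coding-aware signatures; plain SHA-256 digests are used here purely to illustrate the property, and all names are assumptions:

```python
# Sketch: per-block verification that tolerates out-of-order arrival
# and arbitrary losses of other blocks. Each block is checked against
# its own digest, independently of every other block. (The paper's
# scheme uses network-coding signatures; plain SHA-256 digests stand
# in here for illustration.)
import hashlib

def digest_table(blocks: list[bytes]) -> dict[int, str]:
    """Sender publishes one digest per block index."""
    return {i: hashlib.sha256(b).hexdigest() for i, b in enumerate(blocks)}

def verify_block(table: dict[int, str], index: int, data: bytes) -> bool:
    """Receiver verifies any single block on its own, in any order."""
    return table.get(index) == hashlib.sha256(data).hexdigest()

stream = [b"block-0", b"block-1", b"block-2", b"block-3"]
table = digest_table(stream)

# Blocks 3 and 1 arrive out of order and block 2 is lost entirely;
# the blocks that did arrive still verify independently.
print(verify_block(table, 3, b"block-3"))    # → True
print(verify_block(table, 1, b"block-1"))    # → True
print(verify_block(table, 0, b"corrupted"))  # → False
```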
Investing in GenAI: Cost-benefit analysis of Dell on-premises deployments vs.... (Principled Technologies)
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration for how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel... (Principled Technologies)
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
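The memory effect of dropping from 32-bit to 16-bit floating point, noted above for the third workflow, is straightforward to quantify: each value shrinks from 4 bytes to 2, so weight and activation storage roughly halves. A minimal illustration using Python's standard struct module (the 7-billion-parameter figure is a hypothetical model size, not one from the study):

```python
# Why 16-bit floating point "reduced memory usage dramatically":
# each value shrinks from 4 bytes (float32) to 2 bytes (float16), so
# tensor storage roughly halves. struct's 'f' and 'e' formats are
# single- and half-precision IEEE 754 respectively.
import struct

fp32_bytes = struct.calcsize('f')  # 4
fp16_bytes = struct.calcsize('e')  # 2
print(fp32_bytes, fp16_bytes)      # → 4 2

# Weight storage for a hypothetical 7-billion-parameter model:
params = 7_000_000_000
print(params * fp32_bytes / 1e9)   # → 28.0  (GB at fp32)
print(params * fp16_bytes / 1e9)   # → 14.0  (GB at fp16)
```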
More Related Content
Similar to Support your modern distributed microservices applications using VMware Tanzu Service Mesh on servers enabled by 3rd Generation Intel Xeon Scalable processors
Secure cloud transmission protocol (SCTP) was proposed to achieve strong authentication and secure
channel in cloud computing paradigm at preceding work. SCTP proposed with its own techniques to attain
a cloud security. SCTP was proposed to design multilevel authentication technique with multidimensional
password generations System to achieve strong authentication. SCTP was projected to develop multilevel
cryptography technique to attain secure channel. SCTP was proposed to blueprint usage profile based
intruder detection and prevention system to resist against intruder attacks. SCTP designed, developed and
analyzed using protocol engineering phases. Proposed SCTP and its techniques complete design has
presented using Petrinet production model. We present the designed SCTP petrinet models and its
analysis. We discussed the SCTP design and its performance to achieve strong authentication, secure
channel and intruder prevention. SCTP designed to use in any cloud applications. It can authorize,
authenticates, secure channel and prevent intruder during the cloud transaction. SCTP designed to protect
against different attack mentioned in literature. This paper depicts the SCTP performance analysis report
which compares with existing techniques that are proposed to achieve authentication, authorization,
security and intruder prevention.
Secure cloud transmission protocol (SCTP) was proposed to achieve strong authentication and secure
channel in cloud computing paradigm at preceding work. SCTP proposed with its own techniques to attain
a cloud security. SCTP was proposed to design multilevel authentication technique with multidimensional
password generations System to achieve strong authentication. SCTP was projected to develop multilevel
cryptography technique to attain secure channel. SCTP was proposed to blueprint usage profile based
intruder detection and prevention system to resist against intruder attacks. SCTP designed, developed and
analyzed using protocol engineering phases. Proposed SCTP and its techniques complete design has
presented using Petrinet production model. We present the designed SCTP petrinet models and its
analysis. We discussed the SCTP design and its performance to achieve strong authentication, secure
channel and intruder prevention. SCTP designed to use in any cloud applications. It can authorize,
authenticates, secure channel and prevent intruder during the cloud transaction. SCTP designed to protect
against different attack mentioned in literature. This paper depicts the SCTP performance analysis report
which compares with existing techniques that are proposed to achieve authentication, authorization,
security and intruder prevention.
Secure cloud transmission protocol (SCTP) was proposed to achieve strong authentication and secure channel in cloud computing paradigm at preceding work. SCTP proposed with its own techniques to attain a cloud security. SCTP was proposed to design multilevel authentication technique with multidimensional password generations System to achieve strong authentication. SCTP was projected to develop multilevel
cryptography technique to attain secure channel. SCTP was proposed to blueprint usage profile based intruder detection and prevention system to resist against intruder attacks. SCTP designed, developed and analyzed using protocol engineering phases. Proposed SCTP and its techniques complete design has presented using Petrinet production model. We present the designed SCTP petrinet models and its analysis. We discussed the SCTP design and its performance to achieve strong authentication, secure channel and intruder prevention. SCTP designed to use in any cloud applications. It can authorize,
authenticates, secure channel and prevent intruder during the cloud transaction. SCTP designed to protect against different attack mentioned in literature. This paper depicts the SCTP performance analysis report which compares with existing techniques that are proposed to achieve authentication, authorization, security and intruder prevention.
Profit Maximization for Service Providers using Hybrid Pricing in Cloud Compu...Editor IJCATR
Cloud computing has recently emerged as one of the buzzwords in the IT industry. Several IT vendors promise to offer computation, data/storage, and application hosting services, with Service-Level Agreement (SLA)-backed performance and uptime guarantees. While these "clouds" are the natural evolution of traditional clusters and data centers, they are distinguished by a pricing model in which customers are charged based on their utilization of computational resources, storage, and data transfer. They offer subscription-based access to infrastructure, platforms, and applications, popularly termed IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). To improve the profit of service providers, we implement a technique called hybrid pricing, a model that pools fixed and spot pricing techniques.
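The abstract doesn't state the pricing formula, so the following is only an illustrative sketch of how a hybrid model pooling fixed and spot pricing might bill hourly demand (the rates, the per-hour split policy, and the function name are assumptions, not the paper's):

```python
# Toy hybrid pricing: reserved (fixed-rate) capacity covers a baseline,
# and any demand above it is billed at that hour's fluctuating spot rate.
# All rates and policies here are illustrative assumptions.

def hybrid_cost(demand_hours, reserved_capacity, fixed_rate, spot_rates):
    """Bill each hour: up to `reserved_capacity` units at `fixed_rate`,
    the remainder at that hour's spot rate."""
    total = 0.0
    for used, spot in zip(demand_hours, spot_rates):
        fixed_part = min(used, reserved_capacity)   # covered by reservation
        spot_part = max(used - reserved_capacity, 0)  # overflow to spot
        total += fixed_part * fixed_rate + spot_part * spot
    return total

# 3 hours of demand (in instance-units) against 2 reserved units:
cost = hybrid_cost(demand_hours=[1, 3, 4],
                   reserved_capacity=2,
                   fixed_rate=0.10,
                   spot_rates=[0.05, 0.07, 0.12])
```

Under assumptions like these, the provider captures steady revenue from the reserved tier while monetizing spare capacity at spot rates, which is the intuition behind pooling the two schemes.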
Implementation of the Open Source Virtualization Technologies in Cloud Computing (neirew J)
“Virtualization and cloud computing” is a recent buzzword in the digital world. Behind this fancy poetic phrase lies a true picture of future computing, in both technical and social perspectives. Though virtualization and cloud computing are recent, the idea of centralizing computation and storage in distributed data centres maintained by third-party companies is not new; it dates back to the 1990s, alongside distributed computing approaches such as grid computing, clustering, and network load balancing. Cloud computing provides IT as a service to users on demand. This service offers greater flexibility, availability, reliability, and scalability with a utility computing model. This new concept of computing has immense potential for use in the field of e-governance and in the overall IT development of developing countries such as Bangladesh.
The infrastructure cloud (IaaS) service model offers improved resource flexibility and availability, where tenants – insulated from the minutiae of hardware maintenance – rent computing resources to deploy and operate complex systems. Large-scale services running on IaaS platforms demonstrate the viability of this model; nevertheless, many organizations operating on sensitive data avoid migrating operations to IaaS platforms due to security concerns. In this paper, we describe a framework for data and operation security in IaaS, consisting of protocols for a trusted launch of virtual machines and domain-based storage protection. We continue with an extensive theoretical analysis with proofs about protocol resistance against attacks in the defined threat model. The protocols allow trust to be established by remotely attesting host platform configuration prior to launching guest virtual machines and ensure confidentiality of data in remote storage, with encryption keys maintained outside of the IaaS domain. Presented experimental results demonstrate the validity and efficiency of the proposed protocols. The framework prototype was implemented on a test bed operating a public electronic health record system, showing that the proposed protocols can be integrated into existing cloud environments.
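The framework's actual protocols are not reproduced in this abstract; as a toy sketch of the trusted-launch idea only (the configuration fields and hashing scheme below are my assumptions, not the paper's), a launch gate might compare the host's attested configuration against a known-good digest before allowing a guest VM:

```python
import hashlib

# Toy remote-attestation gate: launch a guest VM only if the host's
# measured platform configuration matches a known-good digest.
# This is a sketch of the idea, not the paper's actual protocol.

def measure(platform_config: dict) -> str:
    """Hash a canonical serialization of the host's reported configuration."""
    canonical = "|".join(f"{k}={platform_config[k]}" for k in sorted(platform_config))
    return hashlib.sha256(canonical.encode()).hexdigest()

def trusted_launch_allowed(reported_config: dict, known_good_digest: str) -> bool:
    return measure(reported_config) == known_good_digest

good = {"bios": "v2.1", "hypervisor": "kvm-7.2", "secure_boot": "on"}
golden = measure(good)                               # recorded at enrollment

assert trusted_launch_allowed(good, golden)          # matching host: launch
tampered = dict(good, hypervisor="kvm-7.2-patched")
assert not trusted_launch_allowed(tampered, golden)  # modified host: refuse
```

In a real system the measurement would come from a hardware root of trust rather than a self-reported dictionary; the sketch only shows the compare-before-launch decision.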
Key-exposure resistance has always been an important issue for in-depth cyber defence in many security applications. Recently, how to deal with the key exposure problem in the settings of cloud storage auditing has been proposed and studied. To address the challenge, existing solutions all require the client to update his secret keys in every time period, which may inevitably bring in new local burdens to the client, especially those with limited computation resources such as mobile phones. In this paper, we focus on how to make the key updates as transparent as possible for the client and propose a new paradigm called cloud storage auditing with verifiable outsourcing of key updates. In this paradigm, key updates can be safely outsourced to some authorized party, and thus the key-update burden on the client will be kept minimal. Specifically, we leverage the third party auditor (TPA) in many existing public auditing designs, let it play the role of authorized party in our case, and make it in charge of both the storage auditing and the secure key updates for key-exposure resistance. In our design, TPA only needs to hold an encrypted version of the client’s secret key, while doing all these burdensome tasks on behalf of the client. The client only needs to download the encrypted secret key from the TPA when uploading new files to cloud. Besides, our design also equips the client with capability to further verify the validity of the encrypted secret keys provided by TPA. All these salient features are carefully designed to make the whole auditing procedure with key exposure resistance as transparent as possible for the client. We formalize the definition and the security model of this paradigm. The security proof and the performance simulation show that our detailed design instantiations are secure and efficient.
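The paper's cryptographic construction is not given here, so the following is only a toy model of the workflow it describes (the XOR-pad encryption, hash-derived update deltas, and all names are simplifying assumptions): the TPA advances the key each period without ever decrypting it, and the client decrypts and verifies before use:

```python
import hashlib

# Toy sketch of outsourced key updates (NOT the paper's construction).
# Simplification: keys are 32-byte strings, encryption is an XOR pad,
# and updates are XOR deltas so they commute with the encryption.

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# -- setup (client) --
client_pad = h(b"client-only decryption secret")   # never given to the TPA
key0 = h(b"initial signing key")
enc0 = xor(key0, client_pad)                       # what the TPA stores
commit0 = h(b"commit", key0)                       # public check value

# -- per period (TPA) -- advance the key *without decrypting it*:
def tpa_update(enc: bytes, period: int) -> bytes:
    delta = h(b"public-update-seed", period.to_bytes(4, "big"))
    return xor(enc, delta)                         # ciphertext of key0 XOR delta

# -- on upload (client) -- decrypt and verify before using the key:
def client_recover_and_verify(enc_t: bytes, period: int) -> bytes:
    key_t = xor(enc_t, client_pad)
    delta = h(b"public-update-seed", period.to_bytes(4, "big"))
    # undoing the public delta must yield the committed original key
    assert h(b"commit", xor(key_t, delta)) == commit0, "TPA misbehaved"
    return key_t

enc3 = tpa_update(enc0, 3)
key3 = client_recover_and_verify(enc3, 3)
assert key3 == xor(key0, h(b"public-update-seed", (3).to_bytes(4, "big")))
```

The point of the toy is the division of labor: the TPA holds only ciphertext yet performs the periodic updates, while the client's work is limited to one decryption plus one verification check, mirroring the "transparent key updates" goal the abstract describes.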
The Importance of Testnets in Developing InitVerse dApps.pdf (InitVerse Blockchain)
InitVerse is a blockchain platform that aims to revolutionize governance models through experimental testnets. By exploring the efficacy of different governance models, InitVerse seeks to uncover valuable insights and lessons that can be applied to real-world scenarios. This article will delve into the importance of trialing InitVerse governance models on experimental testnets and highlight the lessons learned from these implementations.
Exploring the Efficacy of InitVerse Governance Models
Governance models play a crucial role in the success of blockchain platforms, as they determine how decisions are made and protocols are updated. However, finding the optimal governance model can be challenging due to the complex and decentralized nature of blockchain networks. InitVerse recognizes this challenge and is dedicated to trialing various governance models on experimental testnets.
Experimental testnets allow for the exploration and evaluation of different governance models in a controlled environment. By simulating real-world scenarios, InitVerse can assess the effectiveness of various models and identify their strengths and weaknesses. This approach ensures that any potential flaws or vulnerabilities are identified and addressed before implementing the governance models on the mainnet.
Through these trials, InitVerse can gather valuable data and user feedback that can inform the decision-making process. By involving the community in the experimentation, InitVerse fosters a collaborative environment where users can actively participate in shaping the platform’s governance models. This inclusive approach ensures that the governance models reflect the needs and preferences of the stakeholders, ultimately enhancing the platform’s long-term sustainability and resilience.
Lessons Learned from Implementing Experimental Testnets
The implementation of experimental testnets has yielded significant lessons that have shaped the development of InitVerse’s governance models. One crucial lesson is the importance of transparency and inclusivity in decision-making. By allowing users to participate in the governance process, InitVerse ensures that decisions are made collectively, promoting consensus and minimizing potential conflicts.
Another key lesson learned is the need for flexibility and adaptability. Blockchain networks are constantly evolving, and governance models must be able to accommodate changes and upgrades. Through the experimental testnets, InitVerse has identified the necessity of modular governance structures that can be easily modified and upgraded to meet the evolving needs of the platform and its users.
This presentation was delivered at the MQTC 2017 conference in Ohio. It covers different concepts and features of MQ you need to consider when moving your IBM MQ infrastructure into the cloud.
Comparison of Current Service Mesh Architectures (Mirantis)
Learn the differences between Envoy, Istio, Conduit, Linkerd and other service meshes and their components. Watch the recording including demo at: https://info.mirantis.com/service-mesh-webinar
In cloud computing, application and desktop delivery are two emerging technologies that have reduced application and desktop computing costs and provided greater IT and user flexibility compared to traditional application and desktop management models. Among the various SaaS technologies is XenApp, which allows numerous end users to connect to their corporate applications from any device. XenApp enables organizations to improve application management by centralizing applications in the datacenter to reduce costs, controlling and encrypting access to data and applications to improve security, and delivering applications instantly to users anywhere through remote access or streaming. Under the old architecture, Oracle or MS-SQL ran as the XenApp backend on the Windows Server itself. This means paying for a database license even though the database serves no other purpose, and because Windows is GUI-based, running the database on it consumes far more resources; it can also create a single point of failure. This paper therefore proposes a scheme to remove this problem by using an HA failover cluster-based SQL server (MySQL or Oracle) running on a Linux box with a virtual IP (VIP).
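The abstract doesn't detail the failover mechanics; as a hedged sketch of the VIP concept (node names, priorities, and the health check are illustrative, not from the paper), the virtual IP simply follows the highest-priority healthy database node, so XenApp always connects to one stable address:

```python
# Toy VIP failover: the virtual IP always points at the highest-priority
# database node that passes its health check, so the application keeps
# one stable address while the backend fails over.
# Names, priorities, and checks here are illustrative assumptions.

def assign_vip(nodes, healthy):
    """nodes: list of (name, priority); highest priority among healthy wins."""
    alive = [(prio, name) for name, prio in nodes if healthy(name)]
    if not alive:
        raise RuntimeError("no healthy database node to hold the VIP")
    return max(alive)[1]

cluster = [("db-primary", 100), ("db-standby", 50)]

# Normal operation: the primary holds the VIP.
assert assign_vip(cluster, healthy=lambda n: True) == "db-primary"

# Primary fails its health check: the VIP moves to the standby.
assert assign_vip(cluster, healthy=lambda n: n != "db-primary") == "db-standby"
```

In practice this role is played by cluster software (e.g., a VRRP-style daemon) rather than application code; the sketch only shows why a VIP removes the single point of failure the abstract describes.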
The main problem addressed is retrieving video content without streaming problems for clients on multiple networks, while avoiding the associated complexity. The proposed work improves collaboration among streaming contents on server resources in order to improve network performance, implementing network collaboration in a content delivery scenario with a strong reduction of data transferred via servers. Audio and video files are transmitted in blocks to clients through peers using a network-coding-equivalent content distribution scheme. The objective of the system is to tolerate out-of-order arrival of blocks in the stream and to be resilient to transmission losses of an arbitrary number of intermediate blocks, without affecting the verifiability of the remaining blocks in the stream. The joint rate control and packet scheduling problem is formulated as an integer program whose objective is to minimize a cost function of the expected video distortion; candidate cost functions are proposed in order to provide service differentiation and address fairness among users.
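The abstract names the optimization but not its form; purely as an illustrative sketch (the notation below is assumed, not the paper's), a joint rate control and packet scheduling integer program of this kind might read:

```latex
% Illustrative sketch only; notation assumed, not the paper's own.
% x_{b,t} = 1 if block b is scheduled in time slot t, r_b its rate,
% R_t the channel budget in slot t, D_u(x) the resulting expected
% distortion for user u, and c_u a per-user cost function that
% provides service differentiation.
\begin{aligned}
\min_{x_{b,t}\in\{0,1\}} \quad & \sum_{u} c_u\bigl(\mathbb{E}[D_u(x)]\bigr) \\
\text{subject to} \quad & \sum_{b} r_b\, x_{b,t} \le R_t \qquad \forall t .
\end{aligned}
```

Choosing each $c_u$ convex and increasing penalizes users with high expected distortion more steeply, which is one way such a cost function can "address fairness among users."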
Similar to Support your modern distributed microservices applications using VMware Tanzu Service Mesh on servers enabled by 3rd Generation Intel Xeon Scalable processors (20)
Investing in GenAI: Cost‑benefit analysis of Dell on‑premises deployments vs....Principled Technologies
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration for how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel...Principled Technologies
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
Enable security features with no impact to OLTP performance with Dell PowerEd...Principled Technologies
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security without paying a large performance cost, consider enabling AMD SME and AMD SEV-ES in your Dell PowerEdge servers.
Improving energy efficiency in the data center: Endure higher temperatures wi...Principled Technologies
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS‑621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621C-TN12R in three scenario types simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenario types and experienced component failures in the latter two tests, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server’s motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe...Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Conclusion
The appeal of incorporating GenAI into your organization’s operations is likely great. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then went on to modify the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with numerous Dell documents and some flexibility, you could be well on your way to innovating your next GenAI breakthrough.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware...Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation (VCF)
For organizations running clusters of moderately configured, older Dell PowerEdge servers with a previous version of VCF, upgrading to better-configured modern servers can provide a significant performance boost and more.
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back...Principled Technologies
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be—you could see better performance per watt with these AMD EPYC processor-based server clusters and potentially get more from your Redis or other data intensive applications and workloads while reducing data center power costs.
Improve performance and gain room to grow by easily migrating to a modern Ope...Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Principled Technologies
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware...Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9ms. For reference, the PowerEdge R760 cluster clocked in at 3.8ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operations adapter provided useful infrastructure insights.
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man...Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than Supermicro servers did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ...Principled Technologies
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a full NVMe backed configuration, but Vendor A doesn’t—its solution uses EBS for storage capacity and NVMe as an extended read cache—which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWSPrincipled Technologies
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes—up to 512 storage nodes with capacity of up to 8 PBs—enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
Get in and stay in the productivity zone with the HP Z2 G9 Tower WorkstationPrincipled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to Dell Precision 3660 and 5860 tower workstations in optimized performance modes
Conclusion
HP Z2 G9 Tower Workstation users can change the BIOS settings to dial in the performance mode that best suits their needs: High Performance Mode, Performance Mode, or Quiet Mode. In good
news for both creative and technical professionals, we found that an Intel Core i9-13900 processor-powered HP Z2 G9 Tower Workstation set to High Performance mode received higher CPU-based benchmark scores than both a similarly configured Dell Precision 3660 and a Dell Precision 5860 equipped with an Intel Xeon w5-2455x processor. Plus, the HP Z2 G9 Tower Workstation was quieter while running CPU-intensive Cinebench 2024 and SPECapc for Solidworks 2022 workloads than both Dell Precision tower workstations. This means HP Z2 G9 Tower Workstation users who prize performance over everything else can do so without sacrificing a quiet workspace.
Open up new possibilities with higher transactional database performance from...Principled Technologies
In our PostgreSQL tests, R7i instances boosted performance over R6i instances with previous-gen processors
If you use the open-source PostgreSQL database to run your critical business operations, you have many cloud options from which to choose. While many of these instances can do the job, some can deliver stronger performance, which can mean getting a greater return on your cloud investment.
We conducted hands-on testing with the HammerDB TPROC-C benchmark to see how the PostgreSQL performance of Amazon EC2 R7i instances, enabled by 4th Gen Intel Xeon Scalable processors, stacked up to that of R6i instances with previous-generation processors. We learned that small, medium-sized, and large R7i instances with the newer processors delivered better OLTP performance, with improvements as high as 13.8 percent. By choosing the R7i instances, your organization has the potential to support more users, deliver a better experience to those users, and even lower your cloud operating expenditures by requiring fewer instances to get the job done.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Support your modern distributed microservices applications using VMware Tanzu Service Mesh on servers enabled by 3rd Generation Intel Xeon Scalable processors
1. Support your modern distributed microservices applications using VMware Tanzu Service Mesh on servers enabled by 3rd Generation Intel Xeon Scalable processors
VMware TSM offers automation, workflows, and technologies that optimize service mesh operations and performance
Software developers are increasingly using microservices Kubernetes architectures, where applications comprise a variety of independent services. A service mesh is a platform that coordinates communication and security between these services.1
We conducted hands-on testing to explore how the automation, workflows, and technologies in VMware® Tanzu Service Mesh™ (TSM) optimize service mesh operations and performance. We first deployed a microservices application, distributed over two Kubernetes clusters, with secure inter-cluster communications for the services. We did this twice, once using TSM and once using only Istio. (Note that TSM deploys a version of Istio onto the Kubernetes client cluster, where application workloads run.) We also tested two performance-optimization strategies: a TLS optimization scheme that uses features provided by 3rd Generation Intel® Xeon® Scalable processors and a new TCP bypass scheme.2,3,4 We measured the TCP performance improvements between services in a TSM deployment by comparing web service performance with and without these optimizations.
We found that using VMware Tanzu Service Mesh to deploy our microservices service mesh environment required much less time and many fewer steps than using the native Istio distribution. We also found that for communications between pods running on the same node in the TSM environment, using the TLS handshake acceleration lowered request duration by up to 47.1 percent while nearly doubling performance in terms of queries per second, and using this TCP bypass optimization lowered request duration by up to 11.4 percent.
Up to 11.4% lower latency with TCP bypass vs. without the optimization
74% less time and 33% fewer steps to deploy a service mesh environment using VMware TSM vs. using only Istio
1.9x the performance and up to 47.1% lower latency with Intel multi-buffer cryptography on 3rd Generation Intel Xeon Scalable processors vs. without the optimization
Support your modern distributed microservices applications using VMware Tanzu Service Mesh on servers enabled by 3rd Generation Intel Xeon Scalable processors
December 2022
A Principled Technologies report: Hands-on testing. Real-world results.
2. Kubernetes microservices and the need for service mesh
Across a wide range of industries, companies are developing software using a Kubernetes microservices architecture, with multiple independent services making up applications.5 In this building-block approach, a team developing an ecommerce application could use one web server with a back-end database, but that application could comprise services such as credit card verification, product lists, inventory, advertisements, shopping cart, newsletter signup, and so on. This approach gives developers flexibility, letting them use different programming languages, frameworks, and databases for these services, and makes it easier to test new components. A microservices approach also introduces complexity, however; communications between the services must be secure, and the encryption that is necessary for security can increase latency.
To address these concerns, many environments use a service mesh, which VMware describes as “a modern connectivity and security run-time platform” that handles service-to-service communication and security, monitoring, distributed tracing, and resiliency.6
About VMware Tanzu Service Mesh
According to VMware, Tanzu Service Mesh “deploys a curated version of Istio [open source service mesh]”7 and “elevates it to more of a distributed application framework that extends far beyond service-to-service communications and provides advanced security capabilities, resiliency, and automated operations for the application — regardless of which clouds its services are running on.”8
Tanzu Service Mesh offers “advanced, end-to-end connectivity, security, and insights for modern applications—across application end-users, microservices, APIs, and data—enabling compliance with Service Level Objectives (SLOs) and data protection and privacy regulations.”9
Learn more at https://tanzu.vmware.com/service-mesh.
How do VMware Tanzu Service Mesh and Istio relate?
When you onboard a client Kubernetes cluster into the Tanzu Service Mesh SaaS solution, it deploys a managed and curated version of Istio onto the Kubernetes client cluster, where application workloads run.10
According to VMware, “The Tanzu Service Mesh Global Controller uses Istio for certain local control capabilities while also managing the life cycle of that Istio deployment. Customers can choose to utilize this Istio deployment directly or utilize Tanzu Service Mesh’s application programming interface (API) and create a global namespace… which provides automated Istio operations and adds additional layers of policy.”11
VMware states that the TSM global namespace offers the following advanced zero-trust security and compliance capabilities, which are unique to TSM: end-to-end mTLS encryption from service to service without regard to cluster, site, or cloud; access policies for micro-segmentation at the application level; API control and segmentation; PII tracking and data leakage protection for personally identifiable information; and east-west threat detection.12
3. [Figure: user traffic flows through an AVI load balancer to microservices containers in two Kubernetes clusters (Cluster 1 and Cluster 2), with services traffic traversing the service mesh.]
Overview of our testing
To explore the ways that the automation and workflows in VMware Tanzu Service Mesh simplify service mesh operations, we deployed one microservices application over two bare-metal Kubernetes clusters with secure inter-cluster communications via Mutual Transport Layer Security (mTLS). We began with two separate bare-metal Kubernetes clusters of four nodes each, using eight Dell™ PowerEdge™ R650 servers powered by 3rd Generation Intel Xeon Scalable processors. (See Figure 1.)
We recorded the number of steps and the amount of time required to deploy the service mesh on each cluster, connect the two clusters, deploy the application, and configure secure, mutually trusted, encrypted communication between services running on different clusters. We did this twice, once using TSM and once using Istio alone.
To explore the ways that the technologies developed for VMware Tanzu Service Mesh and Istio can improve service mesh performance, we used the TSM environment and the Fortio and k6 load testing tools. First, we conducted tests to determine whether using the Intel multi-buffer cryptography feature for TLS communications in a service mesh on servers with 3rd Generation Intel Xeon Scalable processors could improve performance. To do so, we simplified the mesh and the application, redeploying TSM on only one four-node cluster and creating encrypted web traffic between two services: a k6 web client and a Fortio web server. Second, we used a similar approach to measure the performance improvement that a TCP bypass optimization could provide, though we used a Fortio web client to generate load on the server.
For complete details on both our hardware configurations and test procedures, see the science behind the report.
Figure 1: Diagram of our test environment. Source: Principled Technologies.
4. Time necessary to install service mesh under best-case scenario
mm:ss | Lower is better
Using VMware TSM: 06:15
Using only Istio: 24:27
Figure 2: Time an engineer needed to install service mesh when working with detailed instructions. Less time is better. Source: Principled Technologies.
How did the automation and workflows in TSM reduce the time to create a multi-cluster service mesh and deploy a microservices application on it?
To quantify the effort- and time-saving value of the automation and workflows in TSM, we selected two in-house engineers and had them record the time they needed to create a multi-cluster service mesh and deploy a microservices application on it. They performed this scenario twice, once using TSM and once using only Istio. The engineers used a demo application from Google that implements an Online Boutique in microservices.13 They used a standard starting point of two pre-existing Kubernetes clusters with an AVI load balancer configured and operating within the two clusters, and recorded the time and steps they needed to deploy a microservices application on each service mesh spanning two clusters with secure mTLS communication between all services.
Engineer 1 began the deployment familiar with the concepts, but with no personal experience using either TSM or Istio. As he worked and resolved issues, he took detailed notes and created step-by-step instructions that Engineer 2 needed only to execute. This approach gave us insight into both typical and best-case scenarios for deployment speed using TSM and using only Istio.
Engineer 1 needed roughly 30 minutes to execute the scenario using VMware Tanzu Service Mesh and roughly 3 hours to do so using only Istio. These approximations include the necessary and realistic time Engineer 1 spent performing first-time research. When Engineer 2 used the detailed instructions that Engineer 1 had created, he needed 6 minutes and 15 seconds to complete the scenario with VMware Tanzu Service Mesh and 24 minutes and 27 seconds to do so using only Istio. Using VMware TSM reduced the time by 74 percent. Figure 2 shows the time that Engineer 2 required, representing a best-case scenario where the engineer doing the work has detailed instructions.
5. To understand why completing the exercise using VMware TSM saved so much time, let’s look at the steps involved. In this section, we present a high-level overview of the time and steps involved in carrying out our scenario with and without TSM, starting with a fresh bare-metal Kubernetes deployment with two clusters. (Note: in the science behind the report, we provide the detailed step-by-step directions that Engineer 1 prepared for Engineer 2.)
Table 1 presents the five basic tasks our engineers completed when executing our test scenario using only Istio. It states the number of steps and amount of time each task required for Engineer 2, who followed the detailed instructions that Engineer 1 had prepared.
Table 1: The tasks, number of steps, and amount of time involved in executing our multicluster test scenario using only Istio. Source: Principled Technologies.
Tasks using only Istio | Number of steps | Time in mm:ss
Install Istio mesh on the clusters and deploy the Online Boutique application | 23 | 10:03
Install Multi-Primary on different networks | 13 | 10:48
Make application-specific modifications to Istio’s default settings | 1 | 02:31
Deploy the Online Boutique application to the mesh | 6 | 00:45
Verify the Online Boutique application works in the two-cluster Istio service mesh | 2 | 00:20
Total | 45 | 24:27
Table 2 presents the four basic tasks our engineers completed when executing our test scenario using TSM. Note that it was not necessary to make any application-specific modifications to the TSM default settings. Like Table 1, this table shows the time that Engineer 2 required to follow the detailed instructions that Engineer 1 had prepared.
Table 2: The tasks, number of steps, and amount of time involved in executing our multicluster test scenario using TSM. Source: Principled Technologies.
Tasks using VMware TSM | Number of steps | Time in mm:ss
Install TSM | 15 | 04:36
Create the Global Namespace | 8 | 00:34
Deploy the Online Boutique application to the mesh | 5 | 00:45
Verify the Online Boutique application works in TSM with two clusters | 2 | 00:20
Total | 30 | 06:15
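The headline savings figures follow directly from the per-task numbers in Tables 1 and 2; a quick arithmetic check (task names abbreviated in the comments):

```python
# Recompute the totals in Tables 1 and 2 and the headline savings figures.
istio_tasks = [  # (steps, seconds) per task, using only Istio
    (23, 10 * 60 + 3),   # install mesh, deploy application
    (13, 10 * 60 + 48),  # install Multi-Primary on different networks
    (1, 2 * 60 + 31),    # application-specific modifications
    (6, 45),             # deploy Online Boutique to the mesh
    (2, 20),             # verify the application
]
tsm_tasks = [  # (steps, seconds) per task, using VMware TSM
    (15, 4 * 60 + 36),   # install TSM
    (8, 34),             # create the Global Namespace
    (5, 45),             # deploy Online Boutique to the mesh
    (2, 20),             # verify the application
]

istio_steps = sum(s for s, _ in istio_tasks)
istio_secs = sum(t for _, t in istio_tasks)
tsm_steps = sum(s for s, _ in tsm_tasks)
tsm_secs = sum(t for _, t in tsm_tasks)

print(istio_steps, istio_secs)  # 45 1467  (i.e., 24:27)
print(tsm_steps, tsm_secs)      # 30 375   (i.e., 06:15)
print(f"time saved: {(istio_secs - tsm_secs) / istio_secs:.0%}")      # 74%
print(f"steps saved: {(istio_steps - tsm_steps) / istio_steps:.0%}")  # 33%
```

The totals match the tables, and the 74-percent time savings and 33-percent step savings quoted in the report fall out of the same sums.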
Why did VMware TSM save so much time?
Based on the number of steps alone, one might assume that using VMware TSM to complete our scenario would have taken roughly two-thirds as long as using only Istio. In fact, as we noted above, it took just over one-fourth the time, a savings of 74 percent. We attribute this discrepancy to the fact that when using Istio alone, our engineers had to perform a disproportionate number of tasks that they characterized as cumbersome to execute. Many Istio steps required detailed command-line work that had to be very precise. In contrast, the TSM deployment process was largely automated. Our engineers reported that the automation and workflow capabilities simplified many activities, making them quick and easy to execute.
6. How did these acceleration technologies improve service mesh performance?
VMware Tanzu Service Mesh, like Istio, adds sidecar proxies to seamlessly connect microservices. That general approach permits opportunities for increasing performance with TSM or Istio in certain configurations. For example, TSM can use mTLS to secure communications between microservices and the mesh components that connect them. The use of TLS is ubiquitous, and any speedup in TLS could benefit mesh performance.
The slowest part of the TLS algorithm is the initial stage, where the two ends establish trust and exchange cryptographic keys using RSA or similar algorithms. Intel has written cryptographic libraries that make use of AVX-512 operations in its 3rd Generation Intel Xeon Scalable processors. These libraries can potentially accelerate TLS operations in TSM. One approach to speeding TLS with AVX-512 in TSM is Intel multi-buffer cryptography, which uses multiple buffers and processes RSA operations in a SIMD pipeline, potentially enabling greater throughput and reduced latencies.16 In our TLS environment, we tested this approach to optimizing TLS by having 400 simulated users send TLS-secured communications to one web server for 4 minutes. Because the k6 user dropped the channel after it sent each message and received the reply, creating a new TLS channel for the next message was necessary, which offered another opportunity for TLS acceleration.
A second opportunity for increasing performance arises when two microservices run on the same node: one can decrease the number of times a network packet flows through the OS’s TCP module by having their proxies communicate via an eBPF routine. That routine enforces the security and routing controls without having to use all parts of the SDN machinery. We tested one such eBPF TCP bypass scheme17 to determine its performance gains over the default TCP stack.
About 3rd Gen Intel Xeon Scalable processors
According to Intel, 3rd Gen Intel Xeon Scalable processors are “[o]ptimized for cloud, enterprise, HPC, network, security, and IoT workloads with 8 to 40 powerful cores and a wide range of frequency, feature, and power levels.”14
Their features include Intel Advanced Vector Extensions 512 (Intel AVX-512), which Intel says “[b]oosts performance and throughput for the most demanding computational tasks in applications such as modeling and simulation, data analytics and machine learning, data compression, visualization, and digital content creation.”15
To learn more about the 3rd Generation Intel Xeon Scalable processor family, visit https://www.intel.com/content/www/us/en/products/docs/processors/xeon/3rd-gen-xeon-scalable-processors-brief.html.
7. Accelerating TSM’s RSA operations with Intel multi-buffer cryptography on 3rd Generation Intel Xeon Scalable processors
To quantify the impact of the Intel multi-buffer cryptography optimization on RSA operations, we set up a new four-node Kubernetes cluster and deployed TSM. To focus on the AVX-512 capabilities of these processors, we modified the default TSM configuration slightly so that the Istio pods performing the TLS operations had the same memory and CPU resources as those we used with the Intel multi-buffer cryptography solution. We used the k6 load-generating tool to simulate 400 users sending small web requests to the Fortio server. We used k6 because we wanted greater control over the client-side TLS, and we did not need to send requests at a fixed rate. k6 delivered two metrics: request duration, or latency, and queries per second (QPS). We measured performance with and without the Intel multi-buffer cryptography optimization for 3rd Generation Intel Xeon Scalable processors.
Figure 3 shows our findings for 99th percentile latency. Using Intel multi-buffer cryptography reduced request duration by 47.1 percent.
99th percentile request duration (latency) with and without Intel multi-buffer cryptography enabled
Milliseconds | Lower is better
With Intel multi-buffer cryptography: 165
Without Intel multi-buffer cryptography: 312
Figure 3: TLS acceleration test. Performance impact of enabling Intel multi-buffer cryptography on TSM using 3rd Generation Intel Xeon Scalable processors. Lower latency is better. Source: Principled Technologies.
Figure 4 shows our findings for performance in terms of queries per second. Using Intel multi-buffer cryptography nearly doubled performance, achieving 1.9 times as many queries per second.
Queries per second with and without Intel multi-buffer cryptography enabled
Queries per second | Higher is better
With Intel multi-buffer cryptography: 6,339
Without Intel multi-buffer cryptography: 3,304
Figure 4: TLS acceleration test. Performance impact of enabling Intel multi-buffer cryptography on TSM using 3rd Generation Intel Xeon Scalable processors. Greater QPS is better. Source: Principled Technologies.
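Both TLS-acceleration claims can be recomputed directly from the Figure 3 and Figure 4 values:

```python
# Recompute the TLS-acceleration improvements from Figures 3 and 4.
lat_with, lat_without = 165, 312    # p99 request duration in ms (Figure 3)
qps_with, qps_without = 6339, 3304  # queries per second (Figure 4)

latency_reduction = (lat_without - lat_with) / lat_without
qps_ratio = qps_with / qps_without

print(f"{latency_reduction:.1%} lower latency")    # 47.1% lower latency
print(f"{qps_ratio:.1f}x the queries per second")  # 1.9x the queries per second
```

The 47.1-percent latency reduction and the 1.9x QPS multiplier in the text are exactly these two ratios.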
8. Accelerating intra-node communications with a TCP bypass strategy
To quantify the impact of the TCP bypass optimizations on intranode communications, we set up a four-node
Kubernetes cluster and deployed TSM. We used a Fortio client pod to send 1,000 1KB requests per second for
various numbers of virtual users to a Fortio web server pod. The Fortio client measured the request duration, or
latency. We measured performance with and without the optimization.
Figure 5 shows our findings for 99th percentile latency across the four virtual-user counts we tested with and without the TCP bypass optimization. Using this optimization consistently reduced request duration, with improvements ranging from 5.0 percent at 16 virtual users to 11.4 percent at 32 virtual users.

Figure 5: TCP data-path optimization test. 99th percentile request duration (latency) in milliseconds with and without TCP bypass enabled, at various virtual-user counts. 16 virtual users: 2.43 ms with vs. 2.56 ms without; 32 virtual users: 3.96 ms vs. 4.48 ms; 64 virtual users: 7.87 ms vs. 8.68 ms; 128 virtual users: 15.90 ms vs. 17.19 ms. Lower latency is better. Source: Principled Technologies.
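Applying the same percent-reduction formula to the rounded chart values in Figure 5 reproduces the shape of the result; note that because the chart labels are rounded to two decimals, these computed figures differ slightly from the 5.0 and 11.4 percent quoted in the text, which come from the unrounded data.

```python
p99_latency_ms = {  # virtual users: (with TCP bypass, without)
    16: (2.43, 2.56),
    32: (3.96, 4.48),
    64: (7.87, 8.68),
    128: (15.90, 17.19),
}

for users, (with_bypass, without_bypass) in p99_latency_ms.items():
    cut = (without_bypass - with_bypass) / without_bypass * 100
    print(f"{users:>3} virtual users: {cut:.1f}% lower p99 latency")
```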
Additional features of VMware Tanzu Service Mesh
According to VMware, a default installation of TSM offers features that support “Deep application
visibility and actionable insights.”18
Specifically, “Tanzu Service Mesh helps teams overcome the
performance and security visibility gaps resulting from distributed microservices architectures and
adoption of multiple platforms and clouds. Operations teams have access to rich troubleshooting tools,
including multi-cloud topology maps and traffic flows, performance and health metrics, and application-to-infrastructure correlation. [...] Troubleshooting application issues or investigating security incidents becomes much easier—reducing mean time to identify/repair and detect/respond.”19
We did not test these features, which our default installation of Istio did not include.
1. Niran Even-Chen, Oren Penso, Sergio Pozo, and Susan Wu, “Service Mesh For Dummies, VMware 2nd Special Edition,”
accessed October 25, 2022, https://tanzu.vmware.com/content/ebooks/service-mesh-for-dummies-2022.
2. Intel multi-buffer cryptography is part of the Intel Integrated Performance Primitives Cryptography library. To learn more,
see https://github.com/intel/ipp-crypto/blob/develop/sources/ippcp/crypto_mb/Readme.md.
3. Manish Chugtu, “TLS Handshake Acceleration with Tanzu Service Mesh,” accessed September 26, 2022,
https://blogs.vmware.com/networkvirtualization/2022/08/tls-handshake-acceleration-with-tanzu-service-mesh.html.
4. Manish Chugtu, “Tanzu Service Mesh Acceleration using eBPF,” accessed September 26, 2022,
https://blogs.vmware.com/networkvirtualization/2022/08/tanzu-service-mesh-acceleration-using-ebpf.html.
5. Solo.io, “New Research Reveals Microservices, Service Mesh Critical to Modern Digital Transformation Efforts,” accessed September 26, 2022, https://www.globenewswire.com/en/news-release/2022/06/16/2464004/0/en/New-Research-Reveals-Microservices-Service-Mesh-Critical-to-Modern-Digital-Transformation-Efforts.html.
6. Niran Even-Chen, Oren Penso, Sergio Pozo, and Susan Wu, “Service Mesh For Dummies, VMware 2nd Special Edition,”
accessed October 25, 2022, https://tanzu.vmware.com/content/ebooks/service-mesh-for-dummies-2022.
7. VMware, “Top Use Cases for VMware Tanzu Service Mesh, Built on VMware NSX,” accessed October 25, 2022,
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmware-tanzu-usecases.pdf.
8. Niran Even-Chen, Oren Penso, Sergio Pozo, and Susan Wu, “Service Mesh For Dummies, VMware 2nd Special Edition,”
accessed October 25, 2022, https://tanzu.vmware.com/content/ebooks/service-mesh-for-dummies-2022.
9. VMware, “VMware Tanzu Service Mesh,” accessed September 26, 2022, https://tanzu.vmware.com/service-mesh.
10. Niran Even-Chen, Oren Penso, Sergio Pozo, and Susan Wu, “Service Mesh For Dummies, VMware 2nd Special Edition,”
accessed October 25, 2022, https://tanzu.vmware.com/content/ebooks/service-mesh-for-dummies-2022.
11. Niran Even-Chen, Oren Penso, Sergio Pozo, and Susan Wu, “Service Mesh For Dummies, VMware 2nd Special Edition.”
12. Niran Evenchen, “Using Global Namespaces and Zero-Trust Policies with VMware Tanzu Service Mesh,” accessed September 30, 2022, https://tanzu.vmware.com/content/blog/using-global-namespaces-zero-trust-policies-vmware-tanzu-service-mesh.
13. GitHub, GoogleCloudPlatform/microservices-demo, accessed September 26, 2022,
https://github.com/GoogleCloudPlatform/microservices-demo.
14. Intel, “3rd Gen Intel® Xeon® Scalable Processors Brief,” accessed September 26, 2022, https://www.intel.com/content/www/us/en/products/docs/processors/xeon/3rd-gen-xeon-scalable-processors-brief.html.
15. Intel, “3rd Gen Intel® Xeon® Scalable Processors Brief.”
16. Manish Chugtu, “TLS Handshake Acceleration with Tanzu Service Mesh,” accessed September 26, 2022,
https://blogs.vmware.com/networkvirtualization/2022/08/tls-handshake-acceleration-with-tanzu-service-mesh.html.
17. Manish Chugtu, “Tanzu Service Mesh Acceleration using eBPF,” accessed September 26, 2022, https://blogs.vmware.com/networkvirtualization/2022/08/tanzu-service-mesh-acceleration-using-ebpf.html.
18. VMware, “VMware Tanzu Service Mesh,” accessed September 26, 2022, https://tanzu.vmware.com/service-mesh.
19. VMware, “VMware Tanzu Service Mesh.”
Conclusion
If your organization uses a microservices Kubernetes architecture, a service mesh is a valuable tool for
coordinating communication and security among services. In our testing, we deployed a microservices
application, distributed over two Kubernetes clusters with secure inter-cluster communications for the services.
We found that using VMware TSM to carry out this task reduced the amount of time necessary by 74 percent
compared to using only Istio. In performance testing of the TSM environment, the TCP bypass optimization reduced request duration by as much as 11.4 percent, and the Intel multi-buffer cryptography optimization for 3rd Generation Intel Xeon Scalable processors reduced request duration by up to 47.1 percent while nearly doubling performance.
Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.
For additional information, review the science behind this report.
Principled
Technologies®
Facts matter.®
This project was commissioned by VMware.
Read the science behind this report at https://facts.pt/PSaJ15T