This document provides an overview and summary of key concepts around virtualization that will be covered in more depth at a technical deep dive session, including:
- Virtualization capabilities for desktops/laptops and servers including workstation virtualization and server consolidation.
- How virtual machines work and the overhead associated with virtualization.
- Properties of virtualization like partitioning, isolation, and encapsulation.
- Benefits of server virtualization like consolidation, simpler management, and automated resource pooling.
- Comparison of "hosted" and vSphere virtualization architectures.
- Technologies used in virtualization like binary translation, hardware assistance from Intel VT/AMD-V.
- Ability to virtualize CPU-intensive applications.
The document discusses virtualizing Hadoop clusters on VMware vSphere. It describes how Hadoop enables parallel processing of large datasets across clusters using MapReduce. Virtualizing Hadoop provides benefits like simple operations, high availability, and elastic scaling. The document outlines challenges with using Hadoop and how virtualization addresses them. It provides examples of deploying Hadoop clusters on Serengeti and configuring different distributions. Performance results show little overhead from virtualization and benefits of local storage. Joint engineering with Hortonworks adds high availability to Hadoop master daemons using vSphere features.
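The MapReduce model referenced above can be illustrated with a minimal word-count sketch in Python. This is an illustration of the map/shuffle/reduce pattern only, not the Hadoop API; all function names here are hypothetical:

```python
from collections import defaultdict

# Map phase: each input line emits (word, 1) pairs.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

# Shuffle phase: group intermediate pairs by key, as the
# framework would before handing them to reducers.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the counts emitted for each word.
def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data needs big clusters", "clusters run hadoop"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])       # 2
print(counts["clusters"])  # 2
```

In a real Hadoop cluster the map and reduce steps run in parallel across many nodes, and the shuffle moves data over the network; the single-process sketch above only shows the data flow.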
Virtualization in the Cloud @ Build a Cloud Day SFO, May 2012 (The Linux Foundation)
Xen was designed for cloud computing from the outset. It began as a university research project providing strong isolation between virtual machines (VMs) and has since become widely used in cloud computing. The Xen Cloud Platform (XCP) provides a complete virtualization stack and a management API, XenAPI, that allows integration with cloud orchestration platforms like OpenStack. XCP packages Xen, XenAPI, and associated components into Linux distributions for flexibility, providing enterprise-ready virtualization with high performance, security, and scalability for cloud computing.
Choosing the Right Storage for your Server Virtualization Environment (Tony Pearson)
The document discusses storage options for virtualized server environments. It describes the IBM Storwize V7000 disk system, which provides storage performance, utilization and productivity benefits. Key features include multi-platform support, performance optimization, built-in advanced functionality to reduce costs, and high availability and disaster recovery features. The Storwize V7000 uses thin provisioning, automated tiering, replication and data migration to improve efficiency.
This document summarizes a presentation on virtualization solutions from Microsoft and VMware. It provides an overview of key virtualization concepts and the benefits of virtualization. It then covers considerations for capacity planning and highlights features of virtualization management, high availability, live migration, monitoring and guest support in both Microsoft and VMware solutions. Maximum supported virtual machines, CPUs, RAM and versions are listed. The presentation concludes with emphasizing the importance of proper planning and analysis to determine the best virtualization platform and realize ROI/TCO benefits.
SAP Virtualization Week 2012 - The Lego Cloud (aidanshribman)
This document discusses a research project called Hecatonchire that aims to provide distributed shared memory (DSM) capabilities to cloud computing. Hecatonchire breaks down physical servers into their core components of CPU, memory, and I/O, and extends existing cloud software like KVM, QEMU, and libvirt to allow virtual machines to transparently access remote resources. Some key capabilities discussed include live migration, flash cloning for rapid auto-scaling, memory pooling to access unused memory on remote hosts, and the long term goal of implementing true DSM across the cluster. The presentation was given by researchers from SAP Research in Belfast and Israel.
Architecting Virtualized Infrastructure for Big Data (Richard McDougall)
This document discusses architecting virtualized infrastructure for big data. It notes that data is growing exponentially and that the value of data now exceeds hardware costs. It advocates using virtualization to simplify and optimize big data infrastructure, enabling flexible provisioning of workloads like Hadoop, SQL, and NoSQL clusters on a unified analytics cloud platform. This platform leverages both shared and local storage to optimize performance while reducing costs.
Manage rising disk prices with storage virtualization webinar (Hitachi Vantara)
Learn how storage virtualization can reclaim existing storage already on the floor. Extend thin provisioning to existing storage to increase disk utilization and defer capital purchases, and take advantage of zero-page reclaim and WRITE SAME to recover allocated but unused capacity.
Solaris 8 containers and Solaris 9 containers customer presentation (xKinAnx)
Solaris 8 Containers and Solaris 9 Containers allow organizations to consolidate multiple legacy Solaris 8 and 9 application environments onto newer Solaris 10 hardware. This provides benefits like reduced costs, improved utilization, and a bridging technology to help migrate applications to Solaris 10 at each organization's own pace while reducing risks. The technology uses Solaris Containers, BrandZ, and other Solaris 10 features to virtualize and run the legacy environments in a compatible way on Solaris 10 systems. It provides a way to phase upgrades by initially deploying applications in Containers and then later redeploying directly on Solaris 10.
How do you best administer and use your VDI solution? What can Microsoft offer for VDI operations, and how is it used in practice? We look at how System Center, among other tools, can be used in a VDI solution.
Virtualization tutorial at ACM Bangalore Compute 2009 (ACMBangalore)
This document summarizes a tutorial on the hardware revolution in server virtualization. It begins with an overview of server virtualization technologies including VMM architectures and the criteria for a processor to be virtualizable. It then discusses the challenges of virtualizing x86 processors due to their architecture. The document outlines software techniques like binary translation and para-virtualization used for CPU, memory, and I/O virtualization. It also reviews hardware techniques enabled by technologies like VT-x, EPT, and SR-IOV. The summary concludes with a brief discussion of future trends in manageability and security relating to server virtualization.
Windows Server 2012 introduces Storage Pools. These let you place USB, external, and internal hard disks into a single storage pool, from which you can then create as many virtual disks as you need. These are in fact VHD files, as already used by Hyper-V. Server 2012 supports RAID levels 0, 1, and 5. If you want flexibility and file redundancy without having to buy an expensive SAN, this feature is for you!
Private cloud virtual reality to reality: a partner story - daniel mar_technicom (Microsoft Singapore)
1. They virtualized their Windows 2003 domain controllers and application servers first using P2V migration for testing.
2. Their Windows 2008 R2 servers were rebuilt from scratch as new virtual machines.
3. Their legacy Windows NT and 2000 servers presented challenges due to limited official support but were still virtualized.
4. Storage was configured with multipathing and their workloads distributed across two Hyper-V hosts for high availability.
5. Adding a third host enabled more workloads to be hosted while maintaining the recommended host reserve of 33% RAM.
Advanced performance troubleshooting using esxtop (Alan Renouf)
This document discusses using esxtop and resxtop tools to troubleshoot performance issues on VMware ESXi hosts. It provides 10 key things to know about esxtop counters and how they work. It then gives examples of using esxtop to troubleshoot common problems like CPU contention, memory issues, network throughput problems, and disk I/O latency. It also lists some other diagnostic tools that can be used along with esxtop.
The document discusses different cloud architectures and lessons learned from 100 CloudStack deployments. It outlines a process for defining a cloud architecture, describing the basic building blocks of a computing cloud. The document differentiates between traditional and cloud workloads, noting that workload reliability requirements drive unique architectural needs. It provides examples of architectures for traditional server virtualization and Amazon-style availability zones.
Building Business Continuity Solutions With Hyper-V (rsnarayanan)
This document provides an overview and agenda for a session on virtualization and high availability. It discusses types of high availability enabled by virtualization including cluster creation and making virtual machines highly available. It also covers demos of Windows Server 2008 cluster creation and configuring virtual machine high availability. Additional topics include stretch clusters, guest clustering best practices, Hyper-V and network load balancing, disaster recovery and virtualization, and new features in Windows Server 2008 R2 such as live migration.
The document discusses the Infortrend DS series RAID storage system. It provides entry-level DAS and SAN storage for SMBs and enterprise remote sites. The DS series offers FC, iSCSI, and SAS host interfaces across 2U, 3U, and 4U form factors supporting up to 240 drives. It includes the SANWatch management suite for local volume-level replication, thin provisioning, and remote replication functionality. The DS series emphasizes high availability through redundant components, cache protection, RAID 6, and local/remote replication capabilities.
Virtualization allows consolidation of servers to improve efficiency and reduce costs. It addresses challenges like high server maintenance costs, power and cooling expenses from datacenter sprawl, and limited space for physical expansion. Solaris virtualization technologies like containers, logical domains, and the xVM hypervisor enable consolidation while maintaining performance and security. They provide flexibility to adapt resource allocation to business needs and improve resilience against failures or disasters.
DataCore's storage virtualization software provides high availability network attached storage (NAS) by enabling non-disruptive failover of clustered file shares across physical servers. It uses synchronous mirroring of file shares between redundant NAS servers for business continuity. Caching and thin provisioning enhance performance and storage efficiency. The solution provides high availability, faster performance, space savings and disaster recovery protection for NAS environments in a cost-effective way by leveraging existing server infrastructure.
Hadoop 2.0 offers significant HDFS improvements: a new append pipeline, federation, wire compatibility, NameNode HA, performance improvements, and more. We describe these features and their benefits. We also discuss development that is underway for the next HDFS release, including much-needed data management features such as snapshots and disaster recovery. We are adding support for different classes of storage devices, such as SSDs, and open interfaces such as NFS; together these extend HDFS into a more general storage system. As with every release, we will continue improvements to the performance, diagnosability, and manageability of HDFS.
Citrix XenDesktop 3.0 provides solutions for desktop virtualization using XenDesktop, XenApp, and XenServer. It offers a universal virtualization platform that allows desktop and application virtualization as well as server virtualization. Users can access their desktops and applications from any device. Citrix Provisioning allows deploying a single master OS and application set across many servers and desktops.
During its beta test of TPC 4.2, Insurer reported improved productivity and time-to-value. Enhanced storage resource agents reduced scan run times. New APIs and enhanced topology maps provided an end-to-end view of the environment for better decision making. Real-time monitoring of replication models and role-based access eliminated previously time-consuming manual processes...
HDX provides high-definition multimedia, graphics, and collaboration capabilities for virtual desktops and applications delivered by Citrix XenDesktop and XenApp. Key HDX technologies include Adaptive Display for optimized server-side rendering, Flash Redirection for offloading Flash content to users' devices, Windows Media Redirection for client-side playback, and HDX 3D Pro for GPU-accelerated 3D graphics over low-bandwidth connections. HDX RealTime enhances real-time communications with features like UDP/RTP support and packet tagging for quality of service.
The document discusses System Center Virtual Machine Manager (SCVMM) 2012. It provides an introduction to new features in SCVMM 2012 including highly available VMM servers, upgrade capabilities, custom properties, expanded PowerShell support, bare metal provisioning, hypervisor support, network and storage management, update management, dynamic optimization, power management, and more. It also includes an agenda for a presentation on SCVMM 2012 that will demonstrate some of these new capabilities.
This document summarizes a presentation about SQL Server 2012 high availability and disaster recovery options. It discusses key disaster recovery terms, how to approach risk management, and different SQL Server high availability and disaster recovery solutions like log shipping, replication, failover clustering, and AlwaysOn availability groups. It also covers new features in SQL Server 2012 and Windows Server 2012 that improve high availability and disaster recovery capabilities.
The document discusses VMware vSphere 4.0, which delivers virtualization solutions for compute, storage, and networking. It provides industry-leading consolidation ratios through features like virtual SMP and large memory support for virtual machines. vSphere 4.0 improves efficiency through technologies like distributed resource scheduling, storage thin provisioning, and fault tolerance. It also maximizes application availability, security, and scalability through integrated services.
Covers the problems of achieving scalability in server farm environments and how distributed data grids provide in-memory storage and boost performance. Includes summary of ScaleOut Software product offerings including ScaleOut State Server and Grid Computing Edition.
Varrow Q4 Lunch & Learn Presentation - Virtualizing Business Critical Applica... (Andrew Miller)
This document provides a summary of a presentation on virtualizing tier one applications. The presentation covered the top 10 myths about virtualizing business critical applications and provided best practices for virtualizing mission critical applications. It also discussed real world tools for monitoring virtualized environments like Confio IgniteVM and vCenter Operations. The presentation aimed to show that virtualizing tier one applications is possible and discussed strategies for virtualizing SQL Server and Microsoft Exchange environments.
Hadoop World 2011: Hadoop as a Service in Cloud (Cloudera, Inc.)
The Hadoop framework was originally designed to run natively on commodity hardware. However, with the growing adoption of cloud computing, there is a stronger requirement to build Hadoop clusters on public/private clouds so that customers can benefit from virtualization and multi-tenancy. This talk introduces some challenges of providing a Hadoop service on a virtualization platform, such as performance, rack awareness, job scheduling, and memory overcommitment, and proposes some solutions.
Virtualization products partition physical servers into multiple virtual machines. Each virtual machine represents a complete system, with processors, memory, networking, storage, and BIOS.
Multiple virtual machines can share physical resources and run side by side on the same server.
Operating systems and applications can run unmodified in virtual machines.
Virtualizing Tier One Applications - Varrow (Andrew Miller)
This document provides best practices for virtualizing mission critical applications like Exchange and SQL Server. It discusses the top 10 myths about virtualizing business critical applications and provides the truths. It then discusses best practices for virtualizing Exchange, including starting simple, licensing, storage configuration, and high availability options. For SQL Server, it covers starting simple, licensing, storage configuration, migrating, and database best practices. It also discusses tools that can be used for database performance analysis when virtualized like Confio IgniteVM and vCenter Operations.
VMware End-User-Computing Best Practices PosterVMware Academy
This document provides best practices for configuring and managing various VMware Horizon and related products in a virtual desktop infrastructure (VDI) environment. It includes recommendations for installing and updating agents in the proper order, sizing infrastructure components appropriately based on the number of users and sessions, optimizing master images, balancing performance and cost considerations, and leveraging tools like App Volumes and User Environment Manager to improve management and end user experience. The document emphasizes the importance of testing, monitoring, and following established norms and limits to ensure a reliable and scalable VDI deployment.
The document summarizes a benchmarking study conducted by Altoros Systems to compare the performance of Couchbase Server, MongoDB, and Cassandra. It outlines the benchmark goals of having a reproducible workload, using a realistic scenario, and comparing latency and throughput. It describes the benchmarking tools, scenario details involving data size, operations, and hardware configuration. Configuration details are provided for each database, including cluster specifications and parameter settings.
On Sep 21, 2012, Renat Khasanshyn, CEO @ Altoros, made a session “Benchmarking Couchbase Server” at CouchConf that was held in San Francisco. In his session, Renat highlighted that all NoSQL vendors say their databases are fast and scalable, but this is not really helpful for end users.
Virtualization allows multiple operating systems to run simultaneously on the same hardware. It provides benefits such as reduced costs, increased hardware utilization, and isolation of virtual machines. Popular virtualization providers include VMware, Red Hat, and Citrix, with VMware's Workstation, GSX Server, and ESX Server being useful virtualization products. Virtualization offers advantages like testing flexibility and disaster recovery benefits.
Hadoop clusters can be provisioned quickly and easily on virtual infrastructure using techniques like linked clones and thin provisioning. This allows Hadoop to leverage capabilities of virtualization like high availability, resource controls, and re-using spare resources. Shared storage like SAN is useful for VM images and metadata, while local disks provide scalable bandwidth for HDFS data. Virtualizing Hadoop simplifies operations and enables flexible, on-demand provisioning of Hadoop clusters.
1) Cloud platforms can support big data workloads through virtualization which provides agility, isolation, lower costs, and operational efficiency.
2) Modern networks with spine-leaf architectures are well-suited for big data by providing uniform high bandwidth connectivity. This allows for new converged and separated storage models.
3) New distributed storage solutions like HDFS, Ceph, and scale-out NAS provide much higher capacity at lower costs than traditional SAN/NAS. They also offer features like erasure coding, snapshots, cloning and geo-replication.
Presentation architecting a cloud infrastructuresolarisyourep
This document provides an agenda and overview for a session on architecting a cloud infrastructure. The agenda includes introductions, gathering requirements, sizing and scaling, host design, vCenter design, cluster design, networking and storage considerations. It emphasizes the importance of gathering requirements from customers and conceptualizing the design based on those requirements. It also discusses various design considerations and best practices for each component of a cloud infrastructure.
With AWS you can choose the right database for the right job. Given the myriad of choices, from relational databases to non-relational stores, this session will profile details and examples of some of the choices available to you (MySQL, RDS, Elasticache, Redis, Cassandra, MongoDB and DynamoDB), with details on real world deployments from customers using Amazon RDS, ElastiCache and DynamoDB.
If you need to build a highly performant, mission-critical, microservice-based system following DevOps best practices, you should definitely check out Service Fabric!
Service Fabric is one of the most interesting services Azure offers today. It provides unique capabilities that outperform competitor products.
We are seeing global companies start to use Service Fabric for their mission-critical solutions.
In this talk we explore the current state of Service Fabric and dive deeper to highlight best practices and design patterns.
We will cover the following topics:
• Service Fabric Core Concepts
• Cluster Planning and Management
• Stateless Services
• Stateful Services
• Actor Model
• Availability and reliability
• Scalability and performance
• Diagnostics and Monitoring
• Containers
• Testing
• IoT
Live broadcast on https://www.youtube.com/watch?v=Zuxfhpab6xo
Updates to Apache CloudStack and LINBIT SDSShapeBlue
In this session, speakers Giles Sirett and Philipp Reisner shared insights into CloudStack and LINBIT. Giles detailed Apache CloudStack's scalability, multi-tenancy, and compatibility with various hypervisors. He also discussed CloudStack's integrated, easy-to-use nature, rapid time-to-value, and its active community. Giles then delved into different use cases, such as IaaS/cloud provisioning, disaster recovery, and sovereign clouds, among others. CloudStack's features, including its support for Kubernetes clusters, its scalable architecture, and high availability, were also discussed.
Following this, Philipp highlighted the four key ways in which LINBIT can help an organisation: protecting data, always keeping your services on, shaping your destiny, and exceeding with best performance. Philipp also delved into the different reasons why LINBIT SDS is so fast, and what the next steps are for DRBD, LINSTOR and the LINSTOR Driver for CloudStack.
-----------------------------------------
On October 10th 2023, ShapeBlue, Ampere Computing and LINBIT held a joint virtual event – Building Next-Generation IaaS. The event explored how the synergy between ARM, Apache CloudStack and LINBIT’s storage solutions can achieve a formidable price-to-performance ratio. There were a total of 3 sessions held by speakers from all 3 organisations.
Similar to Scaling With Sun Systems For MySQL Jan09 (20)
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
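As a flavor of the kind of model such a tutorial might deploy, here is a minimal z-score anomaly detector in Python. This is an illustrative stand-in only; the tutorial's actual model and its Kafka/Prometheus/ArgoCD wiring are not reproduced here:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant signal: nothing to flag
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical sensor readings with one obvious spike.
readings = [20.1, 19.8, 20.3, 20.0, 19.9, 45.7, 20.2]
print(zscore_anomalies(readings, threshold=2.0))  # [45.7]
```

On an edge device this check would run per window over the streamed readings, with flagged values published back to Kafka for downstream alerting.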
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We will explain how to fix common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary expenses, e.g. when a person document is used instead of a mail-in for shared mailboxes. We will show you such cases and their solutions. And of course we will explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder will introduce you to this new world. It will give you the tools and know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to use it best
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
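The core ranking step behind any vector search can be sketched in a few lines of Python (the presentation's Atlas-specific API and index configuration are not reproduced here; the tiny 3-d embeddings are hypothetical, real systems use hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical document embeddings.
docs = {
    "doc_cats": [0.9, 0.1, 0.0],
    "doc_dogs": [0.8, 0.2, 0.1],
    "doc_cars": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # embedding of a pet-related query
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # the pet documents outrank the car document
```

A vector database wraps exactly this similarity ranking in an approximate nearest-neighbor index so it stays fast at millions of documents.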
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Scaling With Sun Systems For MySQL Jan09
1. GET MORE FROM YOUR WEB SERVICE
Scaling MySQL by Leveraging Sun Systems
Steve Staso
Chief Architect, Web Infrastructure Solutions, Global Systems Practice
5. How to Scale?
- Network Load: distribute the connections over multiple servers; increase the number of NICs and networks.
- CPU Load: more CPUs help up to the optimal thread count, beyond that they are useless; scale-out can impact app server activity and scalability can be difficult; reduce the logic in the DB server.
- RAM/Caching: more memory is always good; a scale-out can increase the complexity of the environment; external distributed caching.
- I/O Load: faster disks and controllers always help; scale-out is the best option after an initial optimization at the server level.
- Storage Requirement: SAN and NAS for large data centers; scale-out is often cost effective.
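The "distribute the connections over multiple servers" point above can be sketched with a simple round-robin chooser. The host names are hypothetical, and a real deployment would put this logic in a load balancer or proxy rather than the client:

```python
import itertools

# Hypothetical pool of MySQL read replicas.
replicas = ["db1.example.com", "db2.example.com", "db3.example.com"]
next_host = itertools.cycle(replicas).__next__

def route_query(sql: str) -> str:
    """Pick the next replica for a read query (writes would go to the primary)."""
    host = next_host()
    # In a real client: connect(host).execute(sql)
    return host

hosts = [route_query("SELECT 1") for _ in range(6)]
print(hosts)  # each replica receives two of the six queries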
6. Choose the Right Server Architecture
Server scale-up vs. scale-out for database deployments.
[Diagram, built up across slides 6, 7 and 8: scaling up, scaling out, and diagonal scaling (combining both)]
9. Virtualize
Methods, benefits, recommendations.
[Diagram: memory latency, compute]
Build a next-generation virtual datacenter: increase utilization, less heat and energy usage, up to 10x better price/performance.
10. Implementations of Virtualization
[Diagram comparing virtualization approaches: hardware-based partitioning; Type I hypervisors running directly on hardware (hardware- and software-based system virtualization, e.g. VMs and LDoms); Type II hypervisors running on a desktop OS; and OS virtualization using zones on a single OS instance]
12. Server/OS Virtualization
Sun server virtualization = decreased costs, reduced complexity.
CoolThreads Servers:
- Integrated, open source, no-cost, and flexible virtualization technology: Logical Domains (LDoms)
- Record-breaking performance
- Breakthrough energy and space efficiency
- Available in racks or blades
x64/x86 Servers:
- Most powerful, scalable, virtualized designs, operating on today's range of OS options
- Choice of hypervisor and OS allows for investment protection
- Available in racks or blades
14. The Benchmark
MySQL testbed environment:
- Red Hat Enterprise Linux 5.1 64-bit
- Solaris 10 x86_64 and SPARC 64-bit
- MySQL 5.1.26-rc 64-bit
- MySQL Coolstack 1.3.1 (based on 5.1.26) 64-bit
DBT2 datagen:
- 100 warehouses created with the datagen utility
- 15 GB of data generated for each DB
- Extra table used to set random conditions
- Extra tables added for transaction count
DBT2 stored procedures: SP calls to Delivery, Order Status, Stock Level, New Order, and Payment.
DBT2 call set with random IDs, 100 calls: 4 Delivery, 4 Order Status, 4 Stock Level, 45 New Order, 43 Payment, plus internal SP calls.
mysqlslap shoot-out with:
- 512k complex transactions
- 1, 2, 4, 8, 16, 32, 64, 128, 256 concurrent connections
- 51,200 per iteration, 10 iterations
- 250M single queries
- Warm-up (cold) and hot phases
- High peak of 40k qps with "s"
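The 100-call DBT2 mix described above (45 New Order, 43 Payment, and 4 each of Delivery, Order Status, and Stock Level) can be generated with a simple weighted shuffle. The sketch below only reproduces the transaction distribution, not the stored-procedure bodies:

```python
import random

# Transaction weights per 100-call set, as given on the slide.
MIX = {
    "New Order": 45,
    "Payment": 43,
    "Delivery": 4,
    "Order Status": 4,
    "Stock Level": 4,
}

def call_set(rng: random.Random) -> list:
    """One DBT2-style call set: 100 transaction names in random order."""
    calls = [name for name, count in MIX.items() for _ in range(count)]
    rng.shuffle(calls)
    return calls

calls = call_set(random.Random(42))
print(len(calls), calls.count("New Order"))  # 100 45
```

Each driver thread would execute such a set against the database, calling the corresponding stored procedure with random warehouse/district IDs.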
15. The Systems
For MySQL scaling:
Sun Fire x4200 Server
- 2x AMD 2220 dual-core 2.8 GHz, 1 MB cache/core
- 12 GB RAM, 73 GB 15k rpm SAS drives
- 2U rack unit, 550 W power supply
- MySQL x 1 instance, 8 GB buffer pool
- Estimated list price: US$5,888
Sun Fire x4600 Server
- 8x AMD 8220 dual-core 2.8 GHz, 1 MB cache/core
- 64 GB RAM, 73 GB 15k SAS drives + external storage
- 4U rack unit, 850 W power supply
- MySQL x 4 instances, 6 GB buffer pool
- Estimated list price: US$29,995
Sun Fire T5220 Server
- 1x T2 processor, 8 cores / 64 threads, 1.4 GHz, 4 MB cache
- 64 GB RAM, 73 GB 15k SAS drives + external storage
- 2U rack unit, 750 W power supply
- MySQL x 6 instances, 6 GB buffer pool
- Estimated list price: US$32,115
16. The Database
MySQL Enterprise Solution: enterprise software and services delivered in an annual subscription.
Database:
- The most up-to-date MySQL Enterprise software
- Monthly rapid updates, quarterly service packs
- Hot-fix program
- Indemnification
Monitoring:
- Virtual database assistant
- Global monitoring of all servers
- Web-based central console
- Built-in advisors, expert advice
- Problem query detection/analysis
Support:
- Online self-help MySQL Knowledge Base
- 24/7 problem resolution with priority escalation
- Consultative help
- High-Availability and Scale-Out
Offerings: Subscription (MySQL Enterprise); License (OEM): Embedded Server; Support; MySQL Cluster Carrier-Grade; Training; Consulting; NRE
22. The Application Life-Cycle
The takeaway for how and when to scale:
- Start-up: single instance, small box or full virtualization (Sun Fire x4200)
- Digital entrepreneur: multiple instances, virtualized, consolidated environment (Sun Fire x4200 / x4600)
- Enterprise: multiple instances, virtualized, consolidated environment (Sun Fire T5220)
23. What Can Sun Systems for MySQL Do for Your Web Deployments
- Linux, OpenSolaris, Solaris and Windows; Intel, AMD and SPARC
- Up to 3x more transactions, 3x less power & space, 10x price/performance
- Open Storage delivers 2x better storage density, 2x better price/performance, 10x the capacity
- Deliver competitive advantage with fast I/O, large memory, optimized Web Stack, system design innovations, SSDs, open source virtualization
- Reduce power, space, cooling costs
- Get to market faster with new Web services
- Scalability to support millions of users
- Free 60-day Try & Buy of systems plus MySQL Enterprise; get up to 40% off to keep
24. World Record Performance
Best x86 single Java Virtual Machine performance on the SPECjbb2005 benchmark; 19% faster than the Dell PE R900.
Sun Fire X4450 Server:
- Solaris 10 10/08 operating system
- Java HotSpot software version 1.6.0_06 Performance Release
Targeted at enterprise customers looking for exceptional business process performance in a dense 2RU, 24-processor-core platform.
Source: SPEC and SPECjbb are registered trademarks of the Standard Performance Evaluation Corporation. Competitive benchmark results reflect data published as of 9/12/08. For the latest results, visit http://www.spec.org. Sun Fire X4450 (Intel Xeon X7460, 24 cores, 4 chips, 6 cores/chip, Solaris 10): 448,262 SPECjbb2005 bops, 448,262 SPECjbb2005 bops/JVM.
25. Half the Space!
[Chart: X4450 vs. DL580 G5, PE R900, X3850 M2]
Reduced operating costs for the eco enterprise.
26. Sun™ Blade Servers: Superior Flexibility and Efficiency
Modular architecture delivers flexibility and efficiencies:
- Aggregation of multiple servers
- Common power, cooling, and I/O improves efficiency and reliability
- Modular hot-swappable form factor improves serviceability
[Diagram: modular computing with power, compute, storage, cooling, I/O, and management]
27.
- DTrace: safe, comprehensive observability
- Predictive Self Healing for reliability
- ZFS: innovative approach to data management, scalability, integrity and performance
- Record-setting performance
- Built-in virtualization
- Over 1,000 x86 and SPARC systems supported
- 180+ open source applications
Fast and open; optimized for the Web
28. Professional Network Site Increases Performance by 54% on MySQL
Business issues:
- Fast growth was causing reduced MySQL database response times
- Needed scale and manageability for exponential growth
Sun solution:
- Sun & MySQL Enterprise Platinum, Professional Services
- Sun Servers & Solaris
Business results:
- 54% improvement in query performance
- 39% reduction in database footprint
- Scalable, manageable infrastructure for further growth
"By using Sun products and Professional Services for our solution, we can scale horizontally, and we can scale vertically. And we don't have to change one line of our software code to run dual-core, quad-core, or sixteen-core machines – or any other hardware that Sun provides." ― Jean-Luc Vaillant, CTO, LinkedIn (sun.com/customers)
29. Sun Systems for MySQL Virtualization
Reduce environmental costs; virtualize and scale for maximum eco-efficiency:
- 10x proven price/performance, 3x more throughput, 83% less power
- Fast, free, open hypervisor; low-cost storage arrays
- Breakthrough throughput, eco-efficiency and reliability
- Consolidate up to 128 virtual MySQL servers in 1U/blade format
- Scale MySQL with Tomcat, Apache, Lighttpd, SugarCRM, Drupal, others
- Try risk-, cost- and hassle-free; get up to 40% off to convert Try & Buy
T5220 and T6320: scaling sky-high for MySQL virtualization.
30. Virtualization with LDoms
Tomcat running JPetStore, MySQL backend.
[Chart: transactions per second (TPS), 0 to 9,000, vs. number of logical domains, 1 to 12]
- LDoms & CoolThreads improve scalability and utilization
- Blueprint demonstrating how LDoms enabled a Tomcat / MySQL service to scale 10x when compared to a single application instance
- http://wikis.sun.com/download/attachments/24543563/820-4995.pdf
31. Messaging Services Innovator Gets 10x Better MySQL Price/Performance
Business issues:
- Deliver highly scalable advanced messaging services
- Process messages faster and at reduced cost, operate more efficiently
Sun solution:
- Sun & MySQL Enterprise Platinum, Professional Services
- Sun Servers & Solaris
Business results:
- 4.5x higher performance, 2x headroom
- 4x less, 83% less power use
- Storage admin from weeks to hours
- 10x better $/performance for MySQL infrastructure
"We are a company that believes in empowering our customers, and that power for us comes from Sun. With Sun technology, the only limitation on what we can deliver is our ability to dream. If we can dream it, we can do it." ― Jason Williams, CTO at DigiTar (www.sun.com/customers)
32. Sun Systems for MySQL Rich Media Storage
Gain Control of Exploding Storage Costs for Rich Media
• 2x cost/performance over the closest competitive offering
• Industry's highest data throughput
• 15% less than HP with 2x density; 10% less than Dell at nearly 3x density
• Up to 70% less power and cooling
• Reduces common admin tasks by as much as 82%
• Ideal for rapid rich-media growth: photo, video, audio
• Try risk-, cost-, and hassle-free; get 20% off to convert a Try & Buy
J7000 / J4200 / x4540: Store Rich Media Without Paying the Price
33. Tune and Scale MySQL: Providing Unprecedented Storage Analytics
• Automatic real-time visualization of application- and storage-related workloads
• Solve performance issues through understanding data usage
• Simple, sophisticated instrumentation with real-time, comprehensive analysis
• Supports analysis of multiple simultaneous applications and workloads in real time
• Analysis can be saved, exported, and replayed for further analysis
• Built on DTrace instrumentation
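The analytics above are built on DTrace. As a minimal sketch of the kind of instrumentation involved (run as root on Solaris/OpenSolaris), this one-liner aggregates read and write system calls per process; the probe choice is illustrative, not the product's actual scripts:

```shell
# Count read/write syscalls by executable name until Ctrl-C,
# then print the aggregation (e.g. mysqld's share of I/O calls)
dtrace -n 'syscall::read:entry, syscall::write:entry { @[execname] = count(); }'
```

Because DTrace probes are enabled dynamically and aggregate in the kernel, this kind of instrumentation can run on a live production database with low overhead, which is what makes the real-time analytics feasible.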
34. ZFS Hybrid Storage Pool
Sun X4250 Storage Server Example
Common to both configurations:
• 4 Xeon 7350 processors
• 32GB FB DDR2 ECC DRAM
• OpenSolaris with ZFS
Configuration A (traditional): (7) 146GB 10,000 RPM SAS drives
Configuration B (hybrid): (1) 80G SSD cache device, (1) 32G SSD ZIL device, (5) 400GB 4200 RPM SATA drives
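A hybrid pool like Configuration B is assembled with ZFS's `cache` (L2ARC read cache) and `log` (ZIL) vdev types on the `zpool` command line. The pool name, device names, and RAID layout below are illustrative assumptions:

```shell
# Create a hybrid ZFS pool: five SATA data disks in a raidz vdev,
# an SSD read cache (L2ARC), and an SSD intent-log (ZIL) device.
# Device names are examples only.
zpool create tank raidz c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
    cache c0t1d0 \
    log c0t0d0
zpool status tank   # verify the cache and log vdevs are attached
```

The design point is that DRAM and the read SSD absorb most reads, the ZIL SSD absorbs synchronous writes, and the slow, cheap, low-power SATA disks only have to provide capacity.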
35. ZFS Hybrid Pool Example
Based on Actual Benchmark Results
[Bar chart comparing the Hybrid Storage Pool (DRAM + read SSD + write SSD + 5x 4200 RPM SATA) against the Traditional Storage Pool (DRAM + 7x 10K RPM 2.5" SAS) across Read IOPs, Write IOPs, Cost, Storage Power (Watts), and Raw Capacity (TB); relative figures shown include 4.9x, 3.2x, 2x, 4%, and 11%]
36. MySQL Enterprise Unlimited
• Fixed annual subscription
  > Unlimited servers
  > Unlimited CPUs
  > Unlimited cores
• Simplify
  > No counting
  > No compliance issues
• Pricing
  > No proprietary DBMS license fees
  > Price starts at $40K/year
38. Start Scaling Your MySQL With Sun Systems for MySQL
Learn More
• Download the MySQL TCO white paper
• Download "Scaling Beyond x86: Using LDoms"
sun.com/mysqlsystems

Try it Yourself
• Try a Sun system free for 60 days with MySQL Enterprise
• Kick the tires. Check under the hood. Test it. Stress it.
• Get up to 40% off to convert a Try & Buy to a purchase
• Buy it or return it and pay nothing – not even shipping
sun.com/tryandbuy
39. Performance Tuning - Benchmarks - Cloud Computing
Data Warehousing - Business Intelligence - Replication
Scale-Out - Java, PHP, .NET, Ruby & AJAX
High Availability - MySQL Cluster - And much more…
Early Registration Now Open!
https://en.oreilly.com/mysql2009/public/register/
40. GET MORE FROM YOUR WEB SERVICE
Scaling MySQL By
Leveraging Sun Systems
Learn More:
http://www.sun.com/mysqlsystems
Steve Staso
Sun Microsystems