This document discusses cluster configuration best practices for Windows Server 2003. It describes different cluster topologies and their advantages/disadvantages based on application deployment needs. Key factors discussed include server load distribution, application design as single or multiple instances, failover policies, and how these impact performance and high availability. The document provides examples to illustrate different configuration options and tradeoffs.
- The document discusses the basics of Windows clustering and quorum models. It defines what clustering is and why it is used for high availability of critical applications.
- It covers the differences between Windows 2008 and 2012 clustering capabilities. Windows 2012 supports more nodes, built-in iSCSI, and file/storage services.
- Common clustering terms are defined like active/passive, heartbeat network, shared disks, quorum, cluster resources, and groups.
- Quorum configuration options are reviewed including typical settings, adding witnesses, and advanced configurations. Quorum helps ensure only one cluster remains active if communication is lost.
The document discusses relocating Windows SharePoint Services databases to a new location. It provides steps to detach databases from SQL Server, move the database files, and reattach the databases. This process requires stopping the SharePoint Timer and World Wide Web Publishing services, detaching databases in SQL Server Management Studio, copying the files to the new location, and then reattaching the databases before restarting the services. Relocating databases is necessary when the default location runs out of space, such as with the SQL Server 2005 Express Embedded Edition which stores databases on the C drive by default.
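The detach/move/reattach sequence summarized above can be sketched in T-SQL. This is a minimal sketch, not the document's exact script: the database name SHAREPOINT and the target path D:\Data are hypothetical and depend on the actual WSS installation.

```sql
-- Run after stopping the SharePoint Timer and
-- World Wide Web Publishing services.
USE master;
GO
-- Detach the content database (hypothetical name SHAREPOINT).
EXEC sp_detach_db @dbname = N'SHAREPOINT';
GO
-- Copy SHAREPOINT.mdf and SHAREPOINT_log.ldf to the new location,
-- then reattach from there:
CREATE DATABASE SHAREPOINT
    ON (FILENAME = N'D:\Data\SHAREPOINT.mdf'),
       (FILENAME = N'D:\Data\SHAREPOINT_log.ldf')
    FOR ATTACH;
GO
```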
This document discusses SQL Server replication and provides steps to configure snapshot replication. Snapshot replication allows entire databases or parts of databases to be copied from a publisher to subscribers. It works by having a snapshot agent capture the database objects and data and store them as snapshot files for distribution agents to apply at subscribers. The steps shown configure a publisher, distributor, publication, and push subscription to replicate tables from one database to another using snapshot replication.
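The publisher/publication/subscription setup described above corresponds roughly to the following T-SQL sketch. It assumes the distributor has already been configured; the database, publication, table, and subscriber names (SourceDB, SnapshotPub, Customers, SUBSCRIBER1, TargetDB) are placeholders.

```sql
USE SourceDB;
GO
-- Enable the database for publishing.
EXEC sp_replicationdboption @dbname = N'SourceDB',
     @optname = N'publish', @value = N'true';
-- Create a snapshot publication and its Snapshot Agent job.
EXEC sp_addpublication @publication = N'SnapshotPub',
     @repl_freq = N'snapshot', @status = N'active';
EXEC sp_addpublication_snapshot @publication = N'SnapshotPub';
-- Add a table as an article.
EXEC sp_addarticle @publication = N'SnapshotPub',
     @article = N'Customers', @source_object = N'Customers';
-- Create a push subscription to the target database.
EXEC sp_addsubscription @publication = N'SnapshotPub',
     @subscriber = N'SUBSCRIBER1', @destination_db = N'TargetDB',
     @subscription_type = N'Push';
GO
```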
SQL Server 2008 replication and database mirroring white paper – Klaudiia Jacome
This white paper describes how to use database mirroring to increase the availability of the replication stream in a transactional SQL Server environment. It covers setting up replication with a mirrored publication or subscription database, how mirroring state changes and failovers affect the replication log reader and distribution agents, and how to recover the replication stream following a mirroring failover using LSN-based initialization. The goal is to provide high availability for the replication topology through the use of database mirroring.
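The LSN-based initialization mentioned above maps to the 'initialize from lsn' sync type of sp_addsubscription. A minimal sketch, with hypothetical names and an illustrative LSN value:

```sql
-- After a mirroring failover, re-create the subscription without a
-- full snapshot by starting the Distribution Agent from a known LSN.
EXEC sp_addsubscription
    @publication     = N'AppPub',
    @subscriber      = N'SUBSCRIBER1',
    @destination_db  = N'AppDB',
    @sync_type       = N'initialize from lsn',
    @subscriptionlsn = 0x00000014000000D80003;  -- last LSN known delivered
GO
```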
This document discusses a proposed cost effective failover clustering system that allows organizations to use different database platforms together in a high availability cluster. The system would use a common interfacing technique to connect two non-similar database platforms. This would allow one database to be licensed while the other could be freeware. A central service would monitor the databases, trace queries to the main database, and redirect clients to the backup database if the main database fails, minimizing downtime. The service would also update the main database with any changes from the backup database after the main database is restored. This approach could help organizations save costs by allowing the use of a licensed and freeware database in the same failover cluster.
Abstract: Every company or organization has to use a database for storing information. But what if this database fails? It therefore behooves the company to keep another database for backup purposes, an arrangement known as failover clustering. To keep such clustering manageable and lucid, corporate buyers typically spend more money on licensed copies of both the core database and the redundant database. This overhead can be avoided by pairing databases on non-similar platforms, where one database is licensed and the other may be freeware. If the platforms are similar, transactions between the databases are simple; with non-similar platforms they are not. To overcome this problem in both cases, cost-effective failover clustering is proposed. Designing such a system requires developing a common interfacing technique between two non-similar database platforms, which makes it possible to use any two database platforms for failover clustering. Keywords: CEFC (Cost Effective Failover Clustering), DAL (Data Access Layer), DB (Database), HA (High Availability)
Using MS-SQL Server with Visual DataFlex – webhostingguy
This document discusses migrating a Visual DataFlex application from using native DataFlex databases to using Microsoft SQL Server. It provides steps to download and install SQL Server Express, load the SQL Server connectivity driver in Visual DataFlex, and use a conversion wizard to migrate tables to SQL Server. Screenshots show the conversion process and how the migrated tables appear in SQL Server Management Studio. Maintaining the migrated application is also briefly discussed.
The document provides an overview of Windows Server Update Services (WSUS) 3.0, which allows administrators to deploy and manage Microsoft product updates. Key features include improved ease of use through a new administration console, enhanced deployment options such as more flexible update approvals and server hierarchies, and improved performance. The document discusses WSUS server and client requirements, server-side features like update targeting and notifications, and deployment scenarios for single or multiple WSUS servers.
Database mirroring allows for high availability and protection of SQL Server databases. It requires at least two SQL servers - a principal database and mirror database. A witness server is also used to automate failover between the principal and mirror databases. The document outlines the implementation steps, which include preparing the servers, backing up the principal database and transaction log, restoring the backup on the mirror server, and configuring security and mirroring settings between the principal, mirror and witness servers. Once setup is complete, the databases are mirrored and failover can occur automatically using the witness server.
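The implementation steps summarized above can be sketched in T-SQL. The server names, the AppDB database, the backup share, and port 5022 are assumptions; mirroring endpoints and security must already be configured on the principal, mirror, and witness servers.

```sql
-- 1. On the principal: back up the database and its transaction log.
BACKUP DATABASE AppDB TO DISK = N'\\share\AppDB.bak';
BACKUP LOG AppDB TO DISK = N'\\share\AppDB_log.bak';

-- 2. On the mirror: restore both WITH NORECOVERY so mirroring can start.
RESTORE DATABASE AppDB FROM DISK = N'\\share\AppDB.bak' WITH NORECOVERY;
RESTORE LOG AppDB FROM DISK = N'\\share\AppDB_log.bak' WITH NORECOVERY;

-- 3. Point each partner (and the witness) at the others.
-- On the mirror:
ALTER DATABASE AppDB SET PARTNER = N'TCP://principal.example.com:5022';
-- On the principal:
ALTER DATABASE AppDB SET PARTNER = N'TCP://mirror.example.com:5022';
ALTER DATABASE AppDB SET WITNESS = N'TCP://witness.example.com:5022';
```

With the witness configured, the session supports automatic failover when the principal becomes unavailable.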
Geek Sync | Field Medic’s Guide to Database Mirroring – IDERA Software
You can watch the replay for this Geek Sync webcast in the IDERA Resource Center: http://ow.ly/eU0750A5pPs
SQL Server's Database Mirroring feature is a powerful tool and much simpler to manage than Availability Groups. Join Microsoft Certified Master Kendra Little to learn when Database Mirroring is the right choice, common rescue scenarios, and special setup required for mirrors.
This document provides information about Venkatesan Prabu Jayakantham (Venkat), the Managing Director of KAASHIV INFOTECH, a software company in Chennai. It outlines Venkat's experience in Microsoft technologies and certifications. It also details the various awards he has received throughout his career. Finally, it advertises KAASHIV INFOTECH's inplant training programs for students in fields like computer science, electronics, and mechanical engineering.
Database Mirror for the exceptional DBA – David Izahk, sqlserver.co.il
1. Rafael Advanced Defense Systems designs, develops, manufactures and supplies high tech defense systems for air, land, sea and space applications with sales exceeding $1.851 billion in 2010 and about 7000 employees.
2. The presentation discusses database mirroring architecture, automation, monitoring and best practices. Automation options covered include T-SQL with SQLCMD, PowerShell and linked servers.
3. Rafael chooses asynchronous database mirroring without automatic failover for high performance, with manual failover when the principal fails to avoid possible data loss from unsynchronized transactions.
This document summarizes the implementation and testing of a virtualized mid-sized Microsoft Office SharePoint Server (MOSS) 2007 farm solution using VMware Infrastructure. The solution characterized a typical physical MOSS 2007 deployment and virtualized it on the VMware platform. Performance testing showed that the virtualized solution performed on par with the physical deployment. The document provides best practices for virtualizing MOSS 2007 and leveraging the benefits of VMware Infrastructure.
This document discusses enhancements in SQL Server 2008, including data compression, backup compression, resource governor, filtered indexes, change data capture, auditing, FILESTREAM storage, policy-based management, the MERGE statement, and programmability enhancements like new data types and row constructors. It provides an overview of major new features and improvements in the SQL Server 2008 database engine.
Kaashiv SQL Server Interview Questions Presentation – kaashiv1
The document introduces Mr. J. Venkatesan Prabu, the Managing Director of KAASHIV INFOTECH. It details his more than 8 years of experience with Microsoft technologies and as Project Lead at HCL Technologies in India and Australia. As a community contribution, Venkat has written over 700 articles read by developers in 170 countries and conducted career guidance programs for over 20,000 students. He has received several awards, including the Microsoft MVP award multiple times. The document acknowledges support from his family and the team at KAASHIV INFOTECH.
A user guide that introduces a new User Interface to HPE NonStop SQL/MX DBS.
SQL/MX DBS is a solution that provides a multi-tenant database environment in which the databases are isolated from each other while still sharing common resources such as compute power, storage, and network capacity. Although the databases share storage, each database uses dedicated, unshared devices, which shields it from shared bottlenecks such as database cache and lock space: both are part of the NonStop SQL Data Access Managers, which are dedicated to a single database and not shared with others.
SQL Server 2008 is Microsoft's enterprise database server designed to compete with Oracle and IBM DB2. It allows users to store, retrieve, and manipulate data to meet business objectives. The document provides an overview of SQL Server 2008, including its editions, components, and new features. It describes how to set up a SQL Server instance, create databases, access and update data, and manage databases on an ongoing basis.
This document provides post-installation instructions and considerations for applying Oracle WebLogic Server Patch Set Updates (PSUs). It includes instructions for releases 12.2.1.4, 12.2.1.3, 12.1.3.0, and 10.3.6. It also provides information on minimum and recommended Java Development Kit (JDK) update levels for different WebLogic Server releases and requirements for using new JDK levels. The document advises restarting WebLogic servers after a PSU and checking for specific security, configuration, or compatibility issues.
This document provides instructions for migrating content from a SharePoint v2 or Windows Small Business Server 2003 installation to Windows SharePoint Services v3. It describes prerequisites like inventorying current data and running a prescan tool. The main steps are stopping the SharePoint v2 service, copying the databases to a new location, and attaching them to the SQL Server used for the WSS v3 installation. Custom lists, document libraries, and metadata need to be evaluated during the migration.
Virtual Solution for Microsoft SQL Server – webhostingguy
This document describes a virtualization solution for running Microsoft SQL Server using VMware Infrastructure 3 and EMC Celerra IP Storage servers. The solution provides benefits like high availability, scalability, and maintaining performance. It ensures SQL Server environments remain available and can grow easily. Virtualizing SQL Server reduces complexity, improves flexibility and reliability of the infrastructure, and allows for simple disaster recovery and provisioning of new resources. The solution consists of VMware Infrastructure 3 for virtualization, EMC Celerra IP storage servers, Dell PowerEdge servers, and Microsoft SQL Server.
This document provides an overview of basic local area network (LAN) concepts including definitions, hardware, media, and sample implementations. It defines a LAN as a group of computers and devices sharing resources within a small geographic area. Common LAN hardware includes hubs, switches, bridges, and routers which connect devices and segment traffic at different OSI model layers. Wired media include twisted pair, coaxial, and fiber optic cables while common wireless technologies are Wi-Fi and WiMax. Sample configurations show home and business LAN setups connecting devices via these components.
This document outlines a 2-hour professional development training for K-12 teachers to learn about hardware, networks, and their impact on classroom instruction. The training will include a lecture on basic computer parts and operating systems, networks and their types, and advantages of school networks. Teachers will then discuss these topics in groups and complete an assessment test. Special needs of participants will be addressed to help all teachers understand.
This document discusses various networking devices used to connect electronic devices and share resources in a computer network. It describes network interface cards (NICs) that provide the physical interface between a computer and cabling. It also covers repeaters that regenerate signals to extend distances, modems that modulate and demodulate signals for internet connections, hubs and switches that connect multiple devices either by broadcasting or selectively forwarding, bridges that segment networks while filtering traffic, and routers that intelligently connect different network types and choose optimal paths between them. The document provides details on the function and layer (physical, data link, network) of operation for each type of networking device.
Mobile-First SEO - The Marketers Edition #3XEDigital – Aleyda Solís
How do you adapt your SEO process to a reality in which more people search on mobile devices than on desktop, with a mobile-first Google index on the way? Check it out.
This document provides an overview of Microsoft's Windows clustering technologies, including Network Load Balancing (NLB), Component Load Balancing (CLB), and Server Cluster. It discusses how these technologies can be used together to provide high availability, reliability, and scalability for applications and services. NLB is used for load balancing web servers, CLB balances application components, and Server Cluster provides failover for databases and backend services. Clusters can be organized as farms of identical servers or packs that share partitioned data. Technologies can scale out by adding servers or scale up individual servers. Server Cluster nodes can operate actively or passively.
This document provides instructions for creating a two-node server cluster using Windows Server 2003. Key steps include installing Windows Server 2003 on each node, setting up two network adapters per node on separate subnets for public and private communication, configuring shared storage accessible by both nodes, and installing the Cluster Service software while ensuring only one node has access to shared storage at a time. The document outlines hardware and software requirements and provides a checklist to prepare for cluster installation and configuration.
SQL Server 2014 Platform for Hybrid Cloud Technical Decision Maker White Paper – David J Rosenthal
The document discusses options for running SQL Server in hybrid cloud environments, including both public and private clouds. In a public cloud, SQL Server can run in either Windows Azure Virtual Machines, which provides full feature parity with on-premises SQL Server, or Windows Azure SQL Database, which offers scalability to millions of users but less control over the operating system. A hybrid approach allows organizations to deploy applications across on-premises and cloud environments to realize the benefits of each.
Cluster service is a Windows technology that enables connecting multiple servers into clusters for high availability and manageability. It provides failover support for applications and services requiring high availability. This white paper focuses on the architecture and features of Cluster service, describing its key components, terminology, concepts, and design goals. It also outlines future directions for Cluster service.
This document provides an overview of database availability groups (DAGs) in Exchange Server 2010. It defines what a DAG is and explains some of its key components and functions, including mailbox databases, copies, the active manager, continuous replication, log shipping, and failover processes. It also briefly reviews some clustering basics such as quorum, standard quorum, and majority node set quorum to provide context before delving into the specifics of DAGs.
The document provides an introduction to Java Enterprise Edition (Java EE). It discusses key concepts such as distributed systems, middleware, the Java EE platform, and Java EE application servers. The Java EE platform consists of the Java SE APIs, Java EE APIs, and a Java EE application server. Applications are built using Java EE components like EJBs and servlets that run within a managed environment provided by the application server.
Getting the most out of your cloud deploymentJose Lopez
Organizations are under ever increasing pressure to reduce IT costs. One of the cost reduction methods is to move applications into the cloud. Cloud-based computing provides organizations with cost savings over time, instead of the upfront costs associated with traditional datacenter equipment. By moving from a capital expenditure (CAPEX) model to an operational expenditure (OPEX) model, organizations are able to reduce costs associated with hardware and cooling in datacenters. There are other benefits like increased productivity based on reduced administrative burden, a result of the self-service nature of cloud computing, and the ability to almost instantly provision to meet service requirements.
When moving an application into the cloud, there are design considerations that must be taken into account. Cloud computing architecture is quite different compared to traditional physical or virtual environments. For example Amazon Web Services (AWS) persistent storage is disassociated from the Amazon Machine Image (AMI), and virtual machine instances are simply disposable.
This document provides information about inplant training programs offered by KAASHIV INFOTECH in Chennai, India. It outlines 5-day training schedules for students of CSE/IT/MCA, ECE/EEE, and Mechanical/Civil engineering. The CSE/IT/MCA schedule focuses on topics like Big Data, app development, ethical hacking, and cloud computing. The ECE/EEE schedule covers embedded systems, wireless systems, and CCNA networking. The mechanical/civil schedule includes aircraft design, vehicle movement, and 3D modeling and packaging. The training is handled by professionals and aims to equip students with strong technical skills.
Microsoft SQL High Availability and ScalingJustin Whyte
This is a white paper that I wrote that explores the different options for Microsoft SQL Server High Availability. I explain the pros and cons of each technology, as well as important considerations like RTO and RPO.
The document discusses major design issues in cloud computing operating systems and techniques to mitigate them. It outlines issues like providing sufficient APIs, security, trust, confidentiality and privacy. To address these, a cloud OS needs to design abstract interfaces following open standards for interoperability. It also needs mechanisms like trusted third parties to establish trust dynamically between systems. The OS must allow for multitenancy while preventing confidentiality breaches through techniques like limiting residual data.
Clustering technologies in Windows Server 2003 help achieve high availability and scalability for critical applications. Server clusters provide high availability by redistributing workloads from failed servers, while Network Load Balancing provides scalability and availability for web services by load balancing requests across multiple servers. The choice depends on whether applications have long-running in-memory state, with server clusters intended for stateful applications like databases and NLB for stateless applications like web servers.
The document provides an overview of Microsoft's Azure Services Platform, which includes four main components: Windows Azure, .NET Services, SQL Services, and Live Services. Windows Azure provides a platform for building and hosting applications in the cloud, .NET Services offers distributed infrastructure services, SQL Services provides data storage and services in the cloud, and Live Services allows accessing and synchronizing data from Microsoft's online applications.
This document provides a summary of 50 common Azure interview questions and answers, organized into basic, general, and experienced categories. It begins by defining common Azure terms like Microsoft Azure, Azure diagnostics, cloud computing, PaaS, SaaS, and IaaS. It then covers questions about Azure roles, deployment models, services, scaling, and advantages of cloud computing. More advanced questions address Azure VM sizes, table storage, repositories, lookups, and SQL Azure database types. The document aims to help candidates prepare for Azure technical interviews.
Computing And Information Technology Programmes EssayLucy Nader
The document discusses proposed solutions to improve the ICT infrastructure of Global Water Company. It identifies problems with the current infrastructure, which includes separate local networks and servers at each of the company's three prime locations, relying on public networks for digital communication between locations. The proposed solution aims to improve communications issues by implementing an updated ICT infrastructure within the ICT department to better support the company's rapid growth over the past decade. The solution will demonstrate how both business and technical goals can be achieved within the given budget.
This document is a thesis on lock-in issues with Platform as a Service (PaaS) cloud computing. Chapter 1 provides background on cloud computing, defining it as on-demand access to shared computing resources over a network. It describes the benefits of cloud computing like reduced costs and ability to quickly scale resources. It outlines the three service models of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Chapter 2 will discuss the problem of vendor lock-in that exists in cloud computing, particularly in PaaS, and the issues it causes for users.
The document discusses the implementation of an information system at a children's hospital in Los Angeles. It describes some of the key purposes and components of a hospital information system, including managing administrative, financial, and clinical data in both paper-based and digital formats. Specifically, the system implemented at this hospital involved purchasing Microsoft software and storing all patient information, doctor reports, and other data in a relational database for easy access and integration across the hospital. An estimated budget and hours for various roles needed such as system analysts, programmers, and database specialists is also provided.
This document discusses cloud computing, including its pros and cons for pharmaceutical companies. It describes the three types of cloud services - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). While cloud computing provides benefits like elastic resources and lower costs, it also poses risks around data security, regulatory compliance, and loss of control. The document analyzes specific cloud applications for molecular modeling and considers how cloud computing could be applicable in the pharmaceutical industry.
This document discusses cloud computing, including its pros and cons for pharmaceutical companies. It describes the three types of cloud services - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). While cloud computing provides benefits like elastic resources and lower costs, it also poses risks around data security, regulatory compliance, and loss of control. The document analyzes specific cloud applications for molecular modeling and considers how cloud computing could be applicable in the pharmaceutical industry.
This document describes a deployment of the Olio web application on Sun servers and OpenSolaris to demonstrate scalability. The solution uses Sun Fire servers for the load drivers, web and caching tier, and database tier. The web tier runs Apache HTTP Server, PHP and Memcached. The database uses MySQL replication across three servers. Testing showed the deployment could handle 10,000 concurrent users with good response times. Scaling and best practices are discussed.
Similar to Cluster configuration best practices (20)
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Server Clusters: Cluster Configuration Best Practices
For Windows Server 2003
Microsoft Corporation
Published: January 2003

Abstract

This document describes several recommended cluster configurations, or topologies, among the many that are possible for Microsoft® Windows® Server 2003. Each configuration is accompanied by a discussion of the relative advantages and disadvantages for various models of application deployment.
Microsoft® Windows® Server 2003 Technical Article

Contents

Cluster and Application Deployments
Server Load
    Server Load – The Basics
Application Style
Application Deployments
Failover Policies
    Failover Pairs
    Hot-Standby Server
    N+I
    Failover Ring
    Random
    Customized control
Load Balancing
Cluster and Application Deployments
Historically, clusters and applications have been described as active/active or active/passive, a natural
consequence of the earliest products supporting only 2-node clusters. In some cases, however, these
terms have led to confusion, because they can be interpreted in different ways depending on whether the
context is 1) how a given application instance is running, 2) how different instances of an application are
running, or 3) whether the nodes in the cluster are performing useful work. To fully understand a specific
application and cluster deployment, we need to answer the following questions:
• How many instances of the application are actually up and running?
• How many instances of the same data are in the cluster?
• Do instances of the data move around?
• Do different instances of the same application run on different cluster nodes?
• Can different applications run on different cluster nodes?
• What kind of load does the application put on a server, and how will this load be re-distributed
after a failover?
Before describing how an application is deployed, we must first define what an application is. An
application, for the purposes of this document, is defined as the running code and data required to
provide a single service to an end-user or client. Consider a couple of different examples to illustrate the
point: A single instance of Microsoft® Word running on a workstation is a single application instance. If
there are multiple instances of Word running, each instance is considered a different application instance.
There are, however, more complex applications. For example, with Microsoft SQL Server™, a single
database is considered to be a single application instance. Independent databases are considered to be
different application instances. A single database, however, can be partitioned into multiple SQL Server
instances and “tied-together” using the SQL Server query engine. In this case, the set of SQL Server
instances that are tied together to provide a single database image is considered a single application
instance.
Fundamentally, there are five different attributes that together provide a complete view of a deployment
and can be used to reason about or characterize it:
• Server load – How much of a server’s resources are consumed by the applications it
supports, and how will this resource utilization be affected by failover?
• Application style – Is the application a single monolithic application that runs on a single node
or is the complete application split into smaller pieces that can be deployed around the
cluster?
• Application deployment – How are the pieces of the application spread around the cluster in a
given deployment?
• Failover policies – How is the application configured to behave after a failover?
• Application implementation – How is the application itself implemented?
Server Load
When deploying an application, it is always important to consider what demands it will make on a server’s
resources. With clustering, there is a related issue that also needs to be taken into account: how is the
load re-distributed after a failover?
Server Load – The Basics
Consider one of the simplest cases, an active/active, 2-node file server cluster, with node A and node B
each serving a single share. If node A fails, its resources will move to node B, placing an additional load
on node B. In fact, if node A and B were each running at only 50% capacity before the failure, node B will
be completely saturated (100% of capacity) after the failover is completed, and performance may suffer.
While this situation may not be optimal, it is important to remember that having all of the applications still
running, even in a reduced performance scenario, is a 100% improvement over what you would have
without the high availability protection that clusters provide. But this does bring up the notion of risk, and
what amount of it you are willing to accept in order to protect the performance, and ultimately the
availability, of your applications.
We have intentionally chosen the worst case (an active/active, 2-node cluster with each node running a
single application that consumes half of the server’s resources) for the purpose of clarity. With an
additional node, the equation changes: there are more servers to support the workload, but if all three
nodes are running at 50% capacity and there are two failures, the single remaining server will simply not
be able to handle the accumulated load of the applications from both of the failed servers. Of course, the
likelihood of two failures is considerably less than that of a single failure, so the risk is mitigated
somewhat.
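The load arithmetic behind these examples can be sketched in a few lines of Python. This is purely an illustrative model, not part of any cluster tooling, and it assumes the simplest case: a failed node's workload spreads evenly across the surviving nodes.

```python
def load_after_failures(node_loads, failed):
    """Per-node utilization after failover, assuming the workloads of
    failed nodes are redistributed evenly across the surviving nodes.

    node_loads: utilization of each node as a fraction of capacity
                (0.5 means 50% loaded).
    failed:     set of indices of the failed nodes.
    """
    surviving = [load for i, load in enumerate(node_loads) if i not in failed]
    lost = sum(load for i, load in enumerate(node_loads) if i in failed)
    if not surviving:
        raise ValueError("no surviving nodes")
    extra = lost / len(surviving)
    return [load + extra for load in surviving]

# 2-node active/active cluster, each node at 50%: one failure
# leaves the survivor completely saturated.
print(load_after_failures([0.5, 0.5], {0}))          # [1.0]

# 3 nodes at 50%: two failures place 150% of capacity on one node.
print(load_after_failures([0.5, 0.5, 0.5], {0, 1}))  # [1.5]
```

The two calls reproduce the scenarios above: the 2-node cluster survives one failure only at full saturation, and the 3-node cluster at 50% load cannot absorb two failures at all.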
Nevertheless, the load/risk tradeoff must be considered when deploying applications on a cluster. The
more nodes in a cluster, the more options you have for distributing the workload. If your requirements
dictate that all of your clustered applications must run with no performance degradation, then you may
need to consider some form of active/passive configuration. But even in this scenario, you must consider
the risks of the various configurations. If you cannot accept even the slightest risk of any reduced
performance under any conditions whatsoever, you will need a dedicated passive node for each active
node.
If, on the other hand, you are convinced that the risk of multiple failures is small, you have other choices.
If you have a 4-node, or 8-node cluster, you may want to consider an “N+I” configuration. N+I, which is
discussed in more detail in section 1.4.3, is a variant of active/passive, where N nodes are active, and I
nodes are passive, or reserve nodes. Typically, the value for I is less than the value for N, and an N+I
cluster topology can handle I failures before any performance degradation is likely. The risk is that with
more than I failures, performance will likely decline, but once again, the likelihood of multiple failures is
increasingly remote.

Server Clusters: Cluster Configuration Best Practices (Microsoft® Windows® Server 2003 Technical Article)
For this reason, N+I clusters are a useful configuration, balancing the hardware cost of 100% passive
server capacity against the low risk of multiple cluster node failures.
Server Load – Some More Realistic Configurations
The scenarios above were intentionally simplistic, assuming that one application imposed a monolithic
load on each server, and thus its resource utilization could not be spread among more than one other
server in the event of a failover. That is often not the case in the real world, especially for file and print
servers, so we will take a look at some additional scenarios with a 4-node cluster, named ABCD, and
having nodes A, B, C, and D.
Typically a single server will support the load of more than one application. If, under normal conditions,
each server were loaded at 25%, then the ABCD cluster could survive the loss of three members before a
likely loss of application availability; three failures out of four nodes is nearly a worst-case scenario.
The following series of figures illustrates what would happen with the application load in a 4-node cluster
for successive node failures. The shaded, or patterned, areas indicate the capacity demands of the
running applications. Further, the example below assumes that the application load on any given server
is divisible, and can be redistributed among any of the surviving nodes.
Figure 1.1: Cluster under normal operating conditions (each node loaded at 25%)
Figure 1.2: Cluster after a single node failure. Note redistribution of application load.
Figure 1.3: Cluster after two node failures. Each surviving node is now approximately 50%
loaded.
Figure 1.4: After three node failures, single surviving node is at full capacity.
If each node were running at 75% capacity, then without sensible failover policies even a single node
failure could result in loss of application availability. Depending on the application(s), however, you can
specify what percentage of the applications should fail over to node A, node B, node C, and node D in
the event of a server failure. If the applications are spread evenly among the surviving nodes, this
cluster can now survive the loss of a single machine, because one third of the failed server's load (one
third of 75% is 25%) is allocated to each of the three surviving machines. The result is three fully loaded
servers (each node that was running at 75% capacity now carries an additional 25%), but all
applications are still available.
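The failover policy just described amounts to splitting the failed node's load into equal shares among the survivors. A small sketch of that redistribution (a hypothetical helper; real clusters configure this per resource group rather than per node):

```python
def redistribute(loads, failed):
    """Spread the failed node's fractional load evenly over the survivors.
    `loads` maps node name -> load; returns the post-failover mapping."""
    survivors = {n: l for n, l in loads.items() if n != failed}
    share = loads[failed] / len(survivors)
    return {n: l + share for n, l in survivors.items()}

before = {"A": 0.75, "B": 0.75, "C": 0.75, "D": 0.75}
after = redistribute(before, "A")
# One third of 75% (i.e. 25%) lands on each survivor: all three are now full.
assert after == {"B": 1.0, "C": 1.0, "D": 1.0}
```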
As a variation on the previous example, the following series of figures depicts what happens to a 4-node
cluster in which each node is running at approximately 33% capacity. Further, in this case, the
application load is indivisible and cannot be spread among multiple other servers in the event of a
failover (perhaps it is a single application, or multiple applications that depend on the same resource).
Figure 2.1: Cluster under normal operating conditions (each node loaded at approximately 33%)
Figure 2.2: Cluster after a single node failure. Note redistributed application load.
Figure 2.3: Cluster after second node failure. Each surviving node is now running at
approximately 66% capacity. Note that in the event of another node failure, this cluster will no
longer be capable of supporting all four of these applications.
Figure 2.4: After third failure, the single surviving server can only support three of the four
applications.
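With indivisible loads, the question after each failure is a packing problem: can every whole application still fit on some surviving node? A greedy first-fit check illustrates the idea (a sketch only; the cluster service's actual placement is driven by failover policies, not bin packing):

```python
def can_host_all(app_loads, node_count, capacity=1.0):
    """Greedy first-fit: place each indivisible application load on one of
    `node_count` nodes without exceeding any node's capacity."""
    nodes = [0.0] * node_count
    for load in sorted(app_loads, reverse=True):  # biggest apps first
        for i in range(node_count):
            if nodes[i] + load <= capacity:
                nodes[i] += load
                break
        else:
            return False  # no surviving node has room for this application
    return True

apps = [0.33] * 4  # four indivisible ~33% applications, as in Figure 2.1
assert can_host_all(apps, 2)      # two survivors can still host all four
assert not can_host_all(apps, 1)  # one survivor cannot (Figure 2.4)
```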
Another style of failover policy is “Failover Pairs”, also known as “Buddy Pairs”. Assuming each of the
four servers is loaded at 50% or less, a failover “buddy” can be associated with each machine. This
allows a cluster in this configuration to survive two failures. More details on Failover Pairs can be found in
section 1.4.1.
Taking the previous Failover Pair example, we can convert it to an active/passive configuration by loading
two servers at 100% capacity, and having two passive backup servers. This active/passive configuration
can also survive two failures, but note that under ordinary circumstances, two servers remain unused.
Furthermore, performance of these servers at 100% load is not likely to be as good as with the Failover
Pair configuration, where each machine is only running at 50% utilization. Between these two examples,
note that you have the same number of servers, the same number of applications, and the same
survivability (in terms of how many nodes can fail without jeopardizing application availability). However,
the Failover Pair configuration clearly comes out ahead of active/passive in terms of both performance
and economy.
Still using our ABCD 4-node cluster, consider what happens if we configure it as an N+I cluster (explained
in more detail in section 1.4.3) where three nodes are running at 100% capacity, and there is a single
standby node. This cluster can only survive a single failure. As before, however, comparing it to the
example where each server is running at 75% capacity, you again have the same number of servers,
applications, and same survivability, but the performance and economy can suffer when you have passive
servers backing up active servers running at 100% load.
Application Style
Now it is time to consider some aspects of the application’s design, and how that affects the way it is
deployed. Applications running in a cluster can be characterized as one of:
• Single Instance
In a single instance application, only one application instance is running in the cluster at any
point in time. An example of this is the DHCP service in a server cluster. At any point in time,
there is only one instance of the DHCP service running in a cluster. The service is made
highly available using the failover support provided with server clusters. Single instance
applications typically have state that cannot be partitioned across the nodes. In the case of
DHCP, the set of IP addresses that have been leased is relatively small, but potentially highly
volatile in a large environment. To avoid the complexities of synchronizing the state around
the cluster, the service runs as a single instance application.
Figure 3: Single instance application
In the example above, the cluster has four nodes. The single instance application, by definition,
can be running on only one node at a time.
• Multiple Instance
A multiple instance application is one where multiple instances of the same code or different
pieces of code that cooperate to provide a single service can be executing around the cluster.
Together these instances provide the illusion of a single service to an end-user or client
computer. There are two different types of multiple instance applications, depending on the type of
application and data:
Cloned Application. A cloned application is a single logical application that consists of
two or more instances of the same code running against the same data set. Each
instance of the application is self-contained, thus enabling a client to make a request to
any instance of the application with the guarantee that regardless of the instance, the
client will receive the same result. A cloned application is scalable since additional
application instances can be deployed to service client requests and the instances can be
deployed across different nodes, thus enabling an application to grow beyond the
capacity of a single node. By deploying application instances on different nodes, the
application is made highly available. In the event that a node hosting application
instances fails, there are instances running on other nodes that are still available to
service client requests.
Typically, client requests are load-balanced across the nodes in the cluster to spread the
client requests amongst the application instances. Applications that run in this environment
do not have long-running in-memory state that spans client requests, since each client
request is treated as an independent operation and each request can be load-balanced
independently. In addition, the data set must be kept consistent across the entire application.
As a result, cloning is a technique typically used for applications that have read-only data or
data that is changed infrequently. Web server front-ends, scalable edge-servers such as
arrays of firewalls and middle-tier business logic fall into this category.
While these applications are typically called stateless applications¹, they can save client-specific,
session-oriented state in a persistent store that is available to all instances of the cloned application.
However, the client must be given a token or key that it can present with subsequent requests, so that
whichever application instance services each request can associate the request with the appropriate
client state.
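The token scheme can be sketched with an in-process dict standing in for the persistent store. In practice this would be a database or file share reachable by every clone; all names here are illustrative.

```python
import uuid

shared_store = {}  # stands in for a persistent store visible to all clones

def begin_session(client_data):
    """Any instance can create the session; the client keeps the token."""
    token = str(uuid.uuid4())
    shared_store[token] = client_data
    return token

def handle_request(token):
    """Whichever instance receives the request finds the state by token."""
    return shared_store[token]

token = begin_session({"cart": ["book"]})
# A different clone, sharing the same store, can service the next request.
assert handle_request(token) == {"cart": ["book"]}
```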
The Microsoft Windows®
platform provides Network Load Balancing as the basic
infrastructure for building scale-out clusters with the ability to spray client requests across the
nodes. Application Center provides a single image view of management for these
environments.
¹ This is really a misnomer: the applications have state; however, the state does not span individual client requests.
Figure 4: Cloned Application
In the example above, the same application "App" is running on each node. Each instance of the
application is accessing the same data set, in this case the data set A-Z. This example shows each
instance having access to its own copy of the data set (created and kept consistent using some form of
staging or replication technique); some applications can instead share a single data set (where the data
is made available to the cluster using a file share, for example).
Figure 5: Cloned Application using a file share
Partitioned Applications. Applications that have long-running in-memory state or have
large, frequently updated data sets cannot be easily cloned. These applications are
typically called stateful applications. The cost of keeping the data consistent across many
instances of the application would be prohibitive.
Fortunately, however, many of these applications have data sets or functionality that can be
readily partitioned. For example, a large file server can be partitioned by dividing the files
along the directory structure hierarchy or a large customer database can be partitioned along
customer number or customer name boundaries (customers from A to L in one database,
customers from M to Z in another database, for example). In other cases, the functionality
can be partitioned or componentized. Once an application is partitioned, the different
partitions can be deployed across a set of nodes in a cluster, thus enabling the complete
application to scale beyond the bounds of a single node. In order to present a single
application image to the clients, a partitioned application requires an application-dependent
decomposition, routing and aggregation mechanism that allows a single client request to be
distributed across the set of partitions and the results from the partitions to be combined into
a single response back to the client. For example, take a partitioned customer database: a single SQL
query to return all of the accounts that have overdue payments requires that the query be sent to every
partition of the database. Each partition will contain a subset of the customer records, which must be
combined into a single data set to be returned to the client.
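The decompose/route/aggregate step for the customer-database example can be sketched as a scatter-gather over the partitions (data and field names are invented for illustration):

```python
# Customer records partitioned alphabetically, as in the A-L / M-Z example.
partitions = {
    "A-L": [{"name": "Adams", "overdue": True}, {"name": "Lee", "overdue": False}],
    "M-Z": [{"name": "Moore", "overdue": True}, {"name": "Zhou", "overdue": False}],
}

def query_all(predicate):
    """Scatter: send the query to every partition.
    Gather: merge each partition's subset into one result set."""
    results = []
    for rows in partitions.values():
        results.extend(r for r in rows if predicate(r))
    return results

overdue = query_all(lambda r: r["overdue"])
assert sorted(r["name"] for r in overdue) == ["Adams", "Moore"]
```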
Partitioning an application allows it to scale but does not, by itself, provide high availability. If a node
that is hosting a partition of the application fails, that piece of the data set or that piece of functionality
is no longer accessible.
Partitioning is typically application-specific, since the aggregation mechanism is dependent on
the type of application and the type of data returned. SQL Server, Exchange data stores and
DFS are all examples of applications and services that can be partitioned for scalability.
Figure 6: Partitioned Application – Data Partitioning
In the example above, each node is running multiple instances of the same application
against different pieces of the complete data set. A single request from a client application
can span multiple instances, for example a query to return all the records in the database.
This splitting of the client request across the different application instances, and the aggregation of the
single result passed back to the client, can be done either by the applications on the server cooperating
amongst themselves (for example, the SQL query engine) or by the client (as is the case with Exchange
2000 data stores).
Applications may also be partitioned along functional lines as well as data sets.
Figure 7: Partitioned Application - Functional Partitions
In the above example, each node in the cluster is performing a different function: one node provides a
catalog service, one provides a billing service, and so on. Together, though, the cluster provides a single,
uniform "book buying" service to clients.
Computational clusters are built specifically to support massively parallel applications. In other words,
the applications are written to decompose a problem into many (potentially thousands of)
sub-operations and execute them in parallel across a set of machines. This type of application is also a
partitioned application. Computational clusters typically provide a set of libraries supplying cluster-wide
communication and synchronization primitives tailored to the environment (MPI is one such set of
libraries). These clusters have been termed Beowulf clusters. Microsoft has created the High
Performance Computing initiative to provide support for this type of cluster.
Application Deployments
Applications can be deployed in different ways across a cluster depending on what style of application is
being deployed and the number of applications that are being deployed on the same cluster. They can be
characterized as follows:
1) One, single instance application
In this type of deployment, one node of the cluster executes the application and services user
requests; the other nodes stand by, waiting to host the application in the event of a failure. This
type of deployment is typically suited to 2-node failover clusters and is not typically used for
N-node clusters, since only 1/N of the total capacity is in use. This is what some people term an
active/passive cluster.
2) Several, single instance applications
Several single instance applications can be deployed on the same cluster. Each application is
independent of the others. In a failover cluster, each application can fail over independently of
the others, so if a node hosting multiple applications fails, the different applications may fail
over to different nodes in the cluster. This type of deployment is typically used for consolidation
scenarios, where multiple applications are hosted on a set of nodes and the cluster provides a
highly available environment. In this environment, careful capacity planning is required to
ensure that, in the event of failures, the nodes have sufficient capacity (CPU, memory, IO
bandwidth, etc.) to support the increased load.
3) A single, multiple instance application
In this environment, while the cluster is only supporting one application, various pieces of the
application are running on the different nodes. In the case of a cloned application, each node
is running the same code against the same data. This is the typical web front-end scenario
(all of the nodes in the cluster are identical). In the event of a failure, the capacity is reduced.
In a partitioned application, the individual partitions are typically deployed across the nodes in
the cluster. If a failure occurs, multiple partitions may be hosted on a single node. Careful
planning is required to ensure that, in the event of a failure, the application SLA is achieved.
Take a 4-node cluster as an example. If an application is partitioned into four pieces, then in
normal running each node hosts one partition. If one node fails, two nodes continue to host one
partition each and the remaining node ends up hosting two partitions, potentially giving that
node twice the load. If the application were instead split into 12 partitions, each node would
host three partitions in the normal case. In the event of a failure, each partition could be
configured to fail over to a different node, so the remaining nodes could each host four
partitions, spreading the load across the cluster. The cost, however, is that supporting 12
partitions may involve more overhead than supporting four.
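The counting argument in this example is easy to verify. With P partitions over N nodes and each partition able to fail over to a different survivor, the busiest node carries a ceiling share of the partitions (a sketch of the arithmetic, not of the cluster service):

```python
import math

def max_partitions_per_node(partitions, nodes, failures):
    """Worst-case partition count on any one node after `failures`,
    assuming partitions can be spread evenly over the survivors."""
    survivors = nodes - failures
    return math.ceil(partitions / survivors)

# 4 partitions on 4 nodes: after one failure, some node hosts 2 of the 4
# partitions -- potentially twice the load of its peers.
assert max_partitions_per_node(4, 4, 1) == 2

# 12 partitions on 4 nodes: 3 per node normally, 4 per node after one
# failure, so the extra load spreads evenly instead of doubling one node.
assert max_partitions_per_node(12, 4, 0) == 3
assert max_partitions_per_node(12, 4, 1) == 4
```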
4) Several, multiple instance applications
Of course, several multiple instance applications may be deployed on the same cluster
(indeed, single instance and multiple instance applications may be deployed on the same
cluster). In the case of multiple cloned applications, each node simply runs one instance of
each application. In the case of partitioned applications, capacity planning, as well as defining
failover targets, becomes increasingly complex.
Failover Policies
Failover is the mechanism that single instance applications and the individual partitions of a partitioned
application typically employ for high availability (the term Pack has been coined to describe a highly
available, single instance application or partition).
In a 2-node cluster, defining failover policies is trivial: if one node fails, the only option is to fail over to
the remaining node. As the size of a cluster increases, different failover policies are possible, and each
has different characteristics.
Failover Pairs
In a large cluster, failover policies can be defined such that each application is set to failover between two
nodes. The simple example below shows two applications App1 and App2 in a 4-node cluster.
Figure 8: Failover pairs
This configuration has pros and cons:
Pros:
• Good for clusters that are supporting heavy-weight² applications, such as databases. This
configuration ensures that in the event of failure, two applications will not be hosted on the
same node.
• Very easy to plan capacity. Each node is sized based on the application that it will need to
host (just like a 2-node cluster hosting one application).
• The effect of a node failure on the availability and performance of the system is very easy to
determine.
• You get the flexibility of a larger cluster. In the event that a node is taken out for maintenance,
the "buddy" for a given application can be changed dynamically (you may end up with the
standby policy described below).
Cons:
• In simple configurations, such as the one above, only 50% of the capacity of the cluster is
in use.
• Administrator intervention may be required in the event of multiple failures.
Failover pairs are supported by server clusters on all versions of Windows by limiting the possible owner
list for each resource to a given pair of nodes.
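In data terms, a failover pair is just a possible-owners list restricted to two nodes. A sketch of choosing a failover target under that restriction (node and application names are illustrative; the real setting lives in each resource's possible owners property):

```python
possible_owners = {
    "App1": ["Node1", "Node2"],  # App1's failover pair
    "App2": ["Node3", "Node4"],  # App2's failover pair
}

def failover_target(app, failed_node, up_nodes):
    """Return the surviving member of the app's pair, if one is up."""
    for node in possible_owners[app]:
        if node != failed_node and node in up_nodes:
            return node
    return None  # both pair members down: administrator intervention needed

assert failover_target("App1", "Node1", {"Node2", "Node3", "Node4"}) == "Node2"
# If the whole pair is down, there is no automatic target.
assert failover_target("App1", "Node1", {"Node3", "Node4"}) is None
```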
Hot-Standby Server
To reduce the overhead of failover pairs, the “spare” node for each pair may be consolidated into a single
node, providing a hot standby server that is capable of picking up the work in the event of a failure.
Figure 9: Standby Server
The standby server configuration has pros and cons:
² A heavy-weight application is one that consumes a significant amount of system resources, such as CPU, memory,
or IO bandwidth.
Pros:
• Good for clusters that are supporting heavy-weight applications such as databases. This
configuration ensures that in the event of a single failure, two applications will not be hosted
on the same node.
• Very easy to plan capacity. Each node is sized based on the application that it will need to
host; the spare is sized to match the largest of the other nodes.
• The effect of a node failure on the availability and performance of the system is very easy to
determine.
Cons:
• The configuration is targeted towards a single point of failure.
• Does not really handle multiple failures well. This may be an issue during scheduled
maintenance, when the spare may already be in use.
Server clusters support standby servers today using a combination of the possible owners list and the
preferred owners list. The preferred node should be set to the node that the application will run on by
default and the possible owners for a given resource should be set to the preferred node and the spare
node.
N+I
The standby server approach works well for 4-node clusters in some configurations; however, its ability
to handle multiple failures is limited. N+I configurations are an extension of the standby server concept,
where there are N nodes hosting applications and I nodes acting as spares.
Figure 10: N+I Spare node configuration
N+I configurations have the following pros and cons:
Pros:
• Good for clusters that are supporting heavy-weight applications such as databases or
Exchange. This configuration ensures that in the event of a failure, an application instance
will fail over to a spare node, not one that is already in use.
• Very easy to plan capacity. Each node is sized based on the application that it will need to
host.
• The effect of a node failure on the availability and performance of the system is very easy to
determine.
• The configuration works well for multiple failures.
Cons:
• Does not really handle multiple applications running in the same cluster well. This policy
is best suited to applications running on a dedicated cluster.
Server clusters support N+I scenarios in the Windows Server 2003 release using a cluster group public
property AntiAffinityClassNames. This property can contain an arbitrary string of characters. In the
event of a failover, if a group being failed over has a non-empty string in the AntiAffinityClassNames
property, the failover manager will check all other nodes. If there are any nodes in the possible owners list
for the resource that are NOT hosting a group with the same value in AntiAffinityClassNames, then
those nodes are considered a good target for failover. If all nodes in the cluster are hosting groups that
contain the same value in the AntiAffinityClassNames property, then the preferred node list is used to
select a failover target.
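The selection rule just described can be modeled in a few lines (a simplified sketch of the failover manager's check, with invented group and node names; the real property is a string set on each cluster group):

```python
def pick_failover_node(group, possible_owners, hosted, anti_affinity):
    """Prefer owner nodes not already hosting a group with the same
    AntiAffinityClassNames value; otherwise fall back to list order
    (standing in here for the preferred node list)."""
    tag = anti_affinity.get(group, "")
    if tag:
        clear = [node for node in possible_owners
                 if all(anti_affinity.get(g, "") != tag for g in hosted.get(node, []))]
        if clear:
            return clear[0]
    return possible_owners[0]

anti_affinity = {"SQL-A": "sql", "SQL-B": "sql"}
hosted = {"Node2": ["SQL-B"], "Node3": []}
# Node3 hosts no group tagged "sql", so it beats Node2 as a target.
assert pick_failover_node("SQL-A", ["Node2", "Node3"], hosted, anti_affinity) == "Node3"
```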
Failover Ring
Failover rings allow each node in the cluster to run an application instance. In the event of a failure, the
application on the failed node is moved to the next node in sequence.
Figure 11: Failover Ring
This configuration has pros and cons:
Pros:
• Good for clusters that are supporting several small application instances, where the
capacity of any node is large enough to support several at the same time.
• The effect on performance of a node failure is easy to predict.
• Easy to plan capacity for a single failure.
Cons:
• The configuration does not work well for all cases of multiple failures. If Node 1 fails, Node 2
will host two application instances while Nodes 3 and 4 host one each. If Node 2 then fails,
Node 3 will be hosting three application instances and Node 4 only one.
• Not well suited to heavy-weight applications, since multiple instances may end up hosted
on the same node even when there are lightly loaded nodes.
Failover rings are supported by server clusters on the Windows Server 2003 release. This is done by
defining the order of failover for a given group using the preferred owner list. A node order should be
chosen and then the preferred node list should be set up with each group starting at a different node.
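Setting up a ring amounts to giving every group the same node order, rotated so each group starts on its home node. A sketch of generating those preferred-owner lists (names illustrative; in practice the lists are set through the cluster administration tools):

```python
def ring_preferred_lists(nodes):
    """One preferred-owner list per application: the shared node order
    rotated so each application starts on a different node."""
    return {f"App{i + 1}": nodes[i:] + nodes[:i] for i in range(len(nodes))}

lists = ring_preferred_lists(["Node1", "Node2", "Node3", "Node4"])
# App2 runs on Node2 and fails over to Node3, then Node4, then Node1.
assert lists["App2"] == ["Node2", "Node3", "Node4", "Node1"]
assert lists["App4"] == ["Node4", "Node1", "Node2", "Node3"]
```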
Random
In large clusters or even 4-node clusters that are running several applications, defining specific failover
targets or policies for each application instance can be extremely cumbersome and error prone. The best
policy in some cases is to allow the target to be chosen at random, with a statistical probability that this
will spread the load around the cluster in the event of a failure.
Random failover policies have pros and cons:
Pros:
• Good for clusters that are supporting several small application instances, where the
capacity of any node is large enough to support several at the same time.
• Does not require an administrator to decide where any given application should fail over to.
• Provided that there are sufficient applications, or the applications are partitioned finely
enough, this provides a good mechanism to statistically load-balance the applications
across the cluster in the event of a failure.
• The configuration works well for multiple failures.
• Very well tuned to handling multiple applications, or many instances of the same
application, running in the same cluster.
Cons:
• Can be difficult to plan capacity. There is no real guarantee that the load will be balanced
across the cluster.
• The effect on performance of a node failure is not easy to predict.
• Not well suited to heavy-weight applications, since multiple instances may end up hosted
on the same node even when there are lightly loaded nodes.
The Windows Server 2003 release of server clusters randomizes the failover target in the event of node
failure. Each resource group that has an empty preferred owners list will be failed over to a random node
in the cluster in the event that the node currently hosting it fails.
Customized control
There are some cases where specific nodes may be preferred for a given application instance.
A configuration that ties applications to nodes has pros and cons:
Pros:
• The administrator has full control over what happens when a failure occurs.
• Capacity planning is easy, since failure scenarios are predictable.
Cons:
• With many applications running in a cluster, defining a good policy for failures can be
extremely complex.
• Very hard to plan for multiple, cascaded failures.
Server clusters provide full control over the order of failover using the preferred node list feature. The full
semantics of the preferred node list can be defined as:
Preferred node list contains all nodes in the cluster:
• Move group to "best possible" (administrator-initiated): the group is moved to the highest node in the
preferred node list that is up and running in the cluster.
• Failover due to node or group failure: the group is moved to the next node on the preferred node list.

Preferred node list contains a subset of the nodes in the cluster:
• Move group to "best possible" (administrator-initiated): the group is moved to the highest node in the
preferred node list that is up and running in the cluster. If no nodes in the preferred node list are up
and running, the group is moved to a random node.
• Failover due to node or group failure: the group is moved to the next node on the preferred node list.
If the node that was hosting the group is the last on the list, or was not in the preferred node list, the
group is moved to a random node.

Preferred node list is empty:
• Move group to "best possible" (administrator-initiated): the group is moved to a random node.
• Failover due to node or group failure: the group is moved to a random node.
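The failover behavior above can be expressed as one small function (a model of the documented semantics, not the cluster service's implementation; the random choice is injectable so the sketch is testable):

```python
import random

def failover_node(preferred, failed_node, all_nodes, choose=random.choice):
    """Failover due to node or group failure, per the semantics above:
    the next node on the preferred list, else a random surviving node."""
    if failed_node in preferred:
        idx = preferred.index(failed_node)
        if idx + 1 < len(preferred):
            return preferred[idx + 1]
    # Empty list, host not on the list, or host was last on the list.
    survivors = [n for n in all_nodes if n != failed_node]
    return choose(survivors)

nodes = ["A", "B", "C", "D"]
assert failover_node(["A", "B", "C", "D"], "B", nodes) == "C"
# Host was last on the list, so the group goes to a random survivor.
assert failover_node(["A", "B"], "B", nodes, choose=lambda c: c[0]) == "A"
```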
Load Balancing
Load balancing is a term that means many different things. Within a cluster environment, load balancing
is typically used in two different ways:
• Spreading client requests across a set of clones – In Windows, Network Load Balancing
(NLB) provides the capability to spread client TCP/IP requests across a set of nodes
(typically Web servers, ISA arrays, or Terminal Server machines). This functionality can also
be provided by external network infrastructure such as Cisco LocalDirector.
• Optimizing the compute resources across a failover cluster – A cluster that supports failover
allows application instances to "move" around the cluster. Historically, this has been used to
ensure that on failure the application can be restarted elsewhere. However, this same
mechanism could be combined with performance counters and application load metrics to
balance the load of applications across the nodes in a cluster. This can be done in
one of two ways:
o Manual load balancing: Each group in a cluster can be moved around
independently of the other cluster groups. By breaking applications down into
fine-granularity groups where possible, manual load balancing can be done using the
cluster administration tools. If the load on a given node is high relative to the other
nodes in the cluster, one or more groups can be manually moved to other cluster
nodes.
o Dynamic load balancing: Today, server clusters do not provide any load balancing
mechanism (beyond the randomized failover policy in Windows Server 2003). Dynamic
load balancing is an extremely complex issue: while moving an application may at
first appear to make sense, the cost of that move can be hard to determine. For
example, moving SQL Server may not only incur the cost of having to re-mount the
database; it also loses the in-memory cache and state, which may have a
significant impact on performance from the client perspective.