This document provides guidance on planning and designing a Microsoft SQL Server 2008 infrastructure. It outlines a 6-step process to determine the necessary SQL Server roles, size requirements for databases and servers, high-level architecture, and placement of database, SSIS, SSAS and SSRS components. The appendices provide additional considerations and examples for configuring SSIS, SSAS and SSRS infrastructures.
The document provides guidance on planning and designing an infrastructure for Microsoft SQL Server 2008 and SQL Server 2008 R2. It outlines a 7-step process for determining requirements and designing the database engine, Integration Services, Analysis Services, Reporting Services, and Master Data Services components. Each step involves gathering requirements, making design decisions, and determining placement of servers and instances. The document also includes examples, job aids, and benefits of using the provided guidance.
Ruchika Goswami has over 8 years of experience as a SQL Server Database Administrator. She has expertise in SQL Server, MongoDB, Cassandra and other databases. She is currently a Senior Database Administrator at Expedia, where she maintains databases and implements high availability solutions across multiple datacenters. Previously, she held database roles at companies like John Deere and SunGard, where she performed tasks like database maintenance, upgrades, backups and resolving performance issues. She has several Microsoft certifications in SQL Server.
The document summarizes a health check for Microsoft SQL Server that assesses efficiency and effectiveness. The check evaluates how fully the SQL Server products have been utilized and considers issues like hardware resources, database tuning, and staff skills. The assessment runs one to five days and delivers recommendations on performance, stability, and availability.
Benefits of SQL Server 2008 R2 Enterprise Edition, by Tobias Koprowski
This document contains information about a SQL Server 2008 R2 launch event, including details about the speaker. It provides the speaker's biography, listing their 12 years of experience in IT, focus areas including high availability and security, and certifications. It also lists the speaker's involvement in Microsoft programs, user groups, publishing, and technical support roles.
SQL Server 2012 is a cloud-ready information platform that helps organizations unlock insights across the business and quickly build solutions that extend data across on-premises and public-cloud environments, backed by mission-critical confidence. Sujit Rai, a technical expert at Convonix, shares its uses in business intelligence.
SharePoint 2010 High Availability and Disaster Recovery - SharePoint Connecti..., by Michael Noel
This document discusses redundancy and high availability strategies for SharePoint 2010 farms. It describes the different server roles in SharePoint and how each can be made redundant. It provides examples of load-balanced web server architectures and search service application topologies using multiple query and crawl servers. The document also covers database mirroring techniques for content databases and best practices for backup and recovery.
The document provides information on the various databases that support Microsoft SharePoint 2010 products, including their relative sizes, recommended co-location, scaling guidance, and recommended SQL Server editions. It includes a chart that defines each database's purpose and characteristics. The databases support features like search, user profiles, social tagging, usage analytics, and business data connectivity.
SharePoint 2010 High Availability - SPC2C, by Michael Noel
This document discusses strategies for architecting a fault tolerant and high performance SharePoint 2010 farm. It covers improvements in SharePoint 2010 infrastructure like the replacement of shared service providers with isolated service applications. It also discusses different SharePoint 2010 farm roles and provides examples of farm architectures with dedicated web, application, and database servers. The document recommends database optimization techniques like distributing database files across multiple disks. It also promotes the use of SQL database mirroring to provide high availability of SharePoint content databases.
The document describes the steps to configure integration of SAP Netweaver User Management with LDAP directory service (Microsoft Active Directory 2003 is used). This involves configuring the LDAP connector, defining system users and server details, mapping user attributes between SAP and LDAP, and synchronizing user data between SAP and the LDAP repository. The Java User Management Engine can also be configured to use LDAP as the user data source.
This document discusses best practice recommendations for SharePoint farm architecture. It recommends having a dedicated SQL database server and at least two web/application servers for high availability. It also recommends virtualizing servers to reduce hardware costs and enable easy scaling and failover. For high availability, it recommends using network load balancing and SQL database mirroring across multiple servers and database instances. The document provides guidance on logical architecture, hardware/software requirements, the installation and configuration process, and enabling Kerberos authentication for security.
SharePoint 2010 High Availability - TechEd Brasil 2010, by Michael Noel
This document summarizes solutions for high availability and disaster recovery in SharePoint 2010. It discusses making SharePoint components like web servers, search service applications, and database servers redundant. It also covers options for database mirroring using SQL Server, including synchronous mirroring within and across sites. Sample farm architectures are presented, from small to large farms, and virtualized environments. Backup strategies using SQL maintenance plans and Data Protection Manager 2010 are also outlined.
This document provides the table of contents for the book "Microsoft SQL Server Black Book". The book contains 13 chapters that cover topics such as installation, configuration, SQL, stored procedures, performance tuning, and more. Each chapter includes an explanatory section and a "Practical Guide" section with hands-on exercises. The author's goal is to help readers harness Microsoft SQL Server's capabilities to create robust production database servers.
Effective Usage of SQL Server 2005 Database Mirroring, by webhostingguy
The document discusses SQL Server 2005 database mirroring, including concepts like principal and mirror databases, transaction safety levels, and how it provides high availability and redundancy compared to other SQL Server features like failover clustering and log shipping. It also provides best practices for configuring and monitoring database mirroring for mission critical databases.
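The transaction safety levels mentioned above map directly to T-SQL. The following is a minimal sketch of establishing a mirroring session and switching safety modes; the database name and endpoint addresses are hypothetical, and the mirroring endpoints and logins are assumed to already exist on both servers.

```sql
-- On the mirror server: point the restored database at the principal's endpoint.
ALTER DATABASE SalesDB
    SET PARTNER = 'TCP://principal.contoso.local:5022';

-- On the principal server: point at the mirror's endpoint to start the session.
ALTER DATABASE SalesDB
    SET PARTNER = 'TCP://mirror.contoso.local:5022';

-- High-safety (synchronous) mode: a commit waits until the mirror
-- has hardened the log record, enabling automatic failover with a witness.
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL;

-- High-performance (asynchronous) mode: commits do not wait on the mirror,
-- trading possible data loss for lower latency on the principal.
-- ALTER DATABASE SalesDB SET PARTNER SAFETY OFF;
```

The choice between FULL and OFF is the practical expression of the availability-versus-latency trade-off the presentation compares against failover clustering and log shipping.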
This document discusses Microsoft's SQL Azure cloud database platform. It provides an overview of SQL Azure's capabilities including scalability, manageability, and developer empowerment. Key points include:
- SQL Azure leverages existing SQL skills and tools while adding new cloud capabilities.
- It provides a dedicated and automatically replicated database infrastructure with high availability.
- Access is via common SQL client libraries connecting directly to databases.
- The initial release focuses on compatibility with common SQL Server features while future releases will add more advanced capabilities.
- Scenarios like departmental apps, web apps, and data hubs are well suited to SQL Azure in version 1.
HP PolyServe Software for Microsoft SQL Server, by webhostingguy
The HP PolyServe Software for Microsoft SQL Server enables consolidation of multiple SQL Server instances onto fewer servers and centralized storage. It provides high availability, virtualization-like flexibility, scalable capacity allocation, and instance mobility. Key features include an adaptive SQL platform, consolidated data management, and high availability for mission-critical applications. It goes beyond consolidation and failovers to meet Microsoft's SQL Server Always On requirements, making it ideal for database consolidation.
This document discusses high availability considerations for SQL Server backends in SharePoint 2010 environments. It covers why high availability is important, various SQL Server high availability and disaster recovery technologies like database mirroring and failover clustering. It also provides best practices for SQL Server and database configuration to optimize performance, including storage configuration, tempdb and content database settings. The presentation demonstrates configurations for failover clustering and database mirroring and shows how to check database health and optimization.
This document provides a step-by-step guide for configuring distributed data center virtualization using Windows Virtual Server 2005 R2, Sanbolic's Melio File System, and Microsoft Clustering Services. Key steps include installing required software, configuring a two-node Microsoft Clustering Services cluster, creating virtual machines and storage on a SAN, and configuring resources to allow active virtual machines to migrate across physical hosts while maintaining access to shared storage.
Kumar Nithick is a Windows and Exchange administrator seeking a position that allows him to grow his technical skills. He has over 12 years of experience administering Windows servers, Active Directory, Exchange, Hyper-V, and SCVMM. His most recent role involved Exchange administration, Windows migrations, troubleshooting, and coordinating with other teams. He is proficient in Dell and HP hardware support, Windows clustering, and SCVMM.
SharePoint 2010 best practices for infrastructure deployments SharePoint Sat..., by Knowledge Cue
Patrick Harkins presented on SharePoint 2010 infrastructure deployment best practices. He discussed technologies like SQL Server, IIS, and SharePoint. He recommended SQL 2008 R2, Windows Server 2008 R2, and naming conventions. He also covered installation best practices like scripting, SQL aliases, DNS, ports, and Kerberos authentication. Finally, he discussed typical New Zealand deployment scenarios for small to large farms.
This document provides tips for preventing server performance issues and getting servers back to full health. It discusses tools for monitoring server health like DDM probes, statistics, and the Domino Configuration Tuner. Specific issues covered include vulnerabilities, aging configurations, bad user habits, and overly powerful developer access. The summary recommends regularly reviewing server configurations using these tools as part of upgrades to proactively address performance degradation over time. It also stresses focusing probes on server roles and only monitoring essential metrics to identify potential problems early.
NZSPC 2013 - Ultimate SharePoint Infrastructure Best Practices Session, by Michael Noel
Michael Noel is the author of 19 technical books on Microsoft technologies that have sold over 300,000 copies. He is a partner at Convergent Computing, an infrastructure and security consulting firm. The document provides hardware and software requirements for SharePoint 2013 environments, including recommended memory, processors, and editions of Windows Server and SQL Server. It also summarizes new features in SharePoint 2013 like the distributed cache service, request management, and claims-based authentication.
Sankar Prasad Sahu is a senior MS SQL DBA with over 10 years of experience in database administration using SQL Server. He has expertise in installation, configuration, backup/recovery, performance tuning, and troubleshooting of MS SQL servers. Currently he works as an MS SQL DBA for General Growth Properties in Hyderabad, India, providing 24/7 production support for 164 MS SQL database servers.
The document provides an overview and agenda for a presentation on Windows Azure SQL Database tips and tricks for beginners. The presentation covers SQL Azure analysis including security requirements, compatibility with different SQL Server versions, scenarios for use, and the shared environment. It also demonstrates SQL Azure features in the Azure mode and discusses the future of database administration.
SQL Server 2017 includes several new features such as Linux support, graph tables, intelligent query processing, resumable online index rebuilds, machine learning services, and in-memory tables. The document provides an overview of each new feature, including examples and demos of graph tables, intelligent query processing, resumable index rebuilds, machine learning services, and in-memory tables. It also lists resources for SQL Server on Linux and machine learning with SQL Server 2017.
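The graph tables feature called out above can be illustrated in a few lines of T-SQL. This is a hedged sketch with hypothetical table and column names; it shows the documented `AS NODE` / `AS EDGE` syntax and a `MATCH` traversal.

```sql
-- Node and edge tables (hypothetical schema for illustration).
CREATE TABLE Person (ID INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
CREATE TABLE Friends (StartDate DATE) AS EDGE;

INSERT INTO Person VALUES (1, N'Alice'), (2, N'Bob');

-- Edges are inserted by supplying the pseudo-columns $from_id and $to_id.
INSERT INTO Friends ($from_id, $to_id, StartDate)
VALUES ((SELECT $node_id FROM Person WHERE ID = 1),
        (SELECT $node_id FROM Person WHERE ID = 2),
        '2017-01-01');

-- MATCH expresses the graph traversal in the WHERE clause.
SELECT p2.Name
FROM Person AS p1, Friends, Person AS p2
WHERE MATCH(p1-(Friends)->p2)
  AND p1.Name = N'Alice';
```

The same pattern extends to multi-hop traversals by chaining additional node and edge aliases inside `MATCH`.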
The document discusses key differences between Microsoft SQL Server and Oracle Server databases. SQL Server 2016 was recently released and offers faster performance for hybrid transactional and analytical processing through new capabilities like integrating analytics into the transactional database. It also allows querying of both structured and unstructured data using T-SQL and stretches databases to Microsoft Azure for reduced storage costs and improved disaster recovery. Compared to Oracle Server, SQL Server 2016 provides stronger data security through new encryption features that encrypt data from server to client.
The document provides guidance on designing a Dynamic Datacenter infrastructure using Microsoft technologies. It outlines a 5-step process: 1) Determine scope, 2) Design virtualization hosts, 3) Design software infrastructure, 4) Design storage, and 5) Design networking. Key aspects covered include workload grouping, host hardware, virtual machine management, configuration management, monitoring, backups, switches, and load balancing. The goal is to provide a well-defined, automated, controlled, and resilient infrastructure.
A Simple Guide to Download SQL Server 2019.pdf, by SoftwareDeals
This guide walks through downloading and installing SQL Server 2019 Standard Edition. The database management system offers efficient data handling, improved security, and enhanced performance, and the guide encourages readers to take advantage of its robust features for seamless database management.
This document discusses the need for database synchronization between Microsoft SQL Server and MySQL databases. It proposes developing a tool that would use linked servers and T-SQL scripts to synchronize data between the two database types. The solution presented to a client involved setting up ODBC connections, linking the MySQL database to MSSQL, and creating stored procedures to handle insert, update, and delete operations between tables. Advantages are synchronization of queryable data and ability to schedule syncs as jobs. Disadvantages include needing technical expertise and tedious script creation. The proposed tool would provide a user interface for configuration and scheduling of syncs.
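The linked-server approach described above can be sketched in T-SQL. The server, DSN, table, and procedure names below are hypothetical; the sketch assumes a system ODBC DSN for the MySQL database has already been created on the SQL Server host, and it shows only the insert direction of a sync.

```sql
-- Register the MySQL database as a linked server via the MSDASQL
-- (OLE DB for ODBC) provider and an existing system DSN 'MySQLDsn'.
EXEC sp_addlinkedserver
    @server     = N'MYSQL_LNK',
    @srvproduct = N'MySQL',
    @provider   = N'MSDASQL',
    @datasrc    = N'MySQLDsn';

-- Push rows that exist locally but not yet in MySQL. A full sync tool
-- would pair this with matching update and delete procedures and
-- schedule all three as SQL Agent jobs.
CREATE PROCEDURE dbo.SyncCustomersInsert
AS
BEGIN
    INSERT OPENQUERY(MYSQL_LNK, 'SELECT id, name FROM customers')
    SELECT c.id, c.name
    FROM dbo.Customers AS c
    WHERE c.id NOT IN (
        SELECT id FROM OPENQUERY(MYSQL_LNK, 'SELECT id FROM customers'));
END;
```

The tedium of writing one such procedure per table and operation is exactly the disadvantage the document notes, and what the proposed configuration UI would automate.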
SQL Server is a relational database management system developed by Microsoft that supports the SQL language. It is available in several editions with different capabilities and limitations. The SQL Server architecture consists of three main layers: the SQL Server Network Interface, the Database Engine, and the SQLOS API. The Database Engine handles query processing and storage. The Web and Express editions have limitations such as caps on usable CPU cores and database size. SQL Azure also has limitations compared with on-premises SQL Server editions, such as differences in supported T-SQL syntax and the inability to switch between databases.
DBA, LEVEL III TTLM Monitoring and Administering Database.docx, by seifusisay06
The document provides information about monitoring, administering, and tuning a SQL Server database, including:
1) Steps for installing and configuring SQL Server.
2) The importance of database monitoring to track performance and ensure availability.
3) Tools that can be used for database monitoring and performance tuning.
4) Activities involved in database maintenance and the different editions of SQL Server 2008.
5) Methods for installing SQL Server, including local, unattended, and remote installations.
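The monitoring topics above are typically approached in SQL Server through dynamic management views (DMVs). The following is a minimal sketch of a live-activity query, assuming the login has VIEW SERVER STATE permission; all objects referenced are standard DMVs.

```sql
-- What is running right now, what it is waiting on, and the SQL text.
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.cpu_time,
       r.total_elapsed_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;   -- skip most system sessions
```

Queries like this form the basis of the availability and performance tracking the document describes, and can be scheduled or captured alongside tools such as SQL Server Profiler.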
2. What Is IPD?
Guidance that aims to clarify and streamline the planning and design process for Microsoft® infrastructure technologies.
IPD:
Defines the decision flow
Describes the decisions to be made
Relates decisions and options for the business
Frames additional questions for business understanding
4. Purpose and Agenda
Purpose
To assist in the decision process to plan a successful Microsoft SQL Server 2008 implementation
Agenda
What is SQL Server 2008?
Review the design flow
Determine the project scope
Determine which roles will be required
Design the infrastructure for each role
5. What Is Microsoft SQL Server 2008?
Microsoft SQL Server 2008 is a database solution that includes four primary components:
Database Engine
Integration Services
Analysis Services
Reporting Services
6. Microsoft SQL Server 2008 - Out of Scope
This guide does not address the following items:
In-place upgrades
Side-by-side upgrades
Developer, Express, Compact, and Evaluation editions of SQL Server
SharePoint® integration
Database design
9. Step 1. Determine Project Scope
The project scope will be determined in order to align the goals of the project with the business motivation:
Task 1: Determine applications in scope
The results of this step will be used to determine which SQL Server roles will be required.
10. Step 2. Determine Which Roles Will Be Required
The product roles required to deliver the business requirements and desired functionality will be identified:
Task 1: Determine if Database Engine will be required
Task 2: Determine if Integration Services will be required
Task 3: Determine if Analysis Services will be required
Task 4: Determine if Reporting Services will be required
11. Step 3. Design the SQL Server Database Engine Infrastructure
The database requirements will be gathered and the database infrastructure will be designed from those requirements:
Task 1: Determine capacity and performance requirements
Storage needs should be calculated for the database, transaction log, indexes, and tempdb database.
After estimating the database size with the formula provided in the guide, add about 5% for database overhead.
Estimate IOPS and throughput as accurately as possible, since underestimating either can degrade performance.
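As a rough illustration of Task 1, the storage estimate above can be sketched as a small calculation. The ratios for indexes, transaction log, and tempdb below are hypothetical assumptions for the example, not figures from the guide; only the ~5% overhead factor comes from the text above.

```python
# Hedged sketch of the Step 3, Task 1 storage estimate (illustrative only).

def estimate_storage_gb(data_gb, index_ratio=0.2, log_ratio=0.25,
                        tempdb_ratio=0.1, overhead=0.05):
    """Sum storage for data, indexes, transaction log, and tempdb,
    then add about 5% for database overhead."""
    subtotal = data_gb * (1 + index_ratio + log_ratio + tempdb_ratio)
    return subtotal * (1 + overhead)

# For a hypothetical 100 GB of raw data:
print(f"{estimate_storage_gb(100):.1f} GB")
```

In practice the per-component sizes would come from the formula in the full guide and from the application vendor, not from fixed ratios.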
12. Step 3. Design the Database Infrastructure (continued)
The database requirements will be gathered and the database infrastructure will be designed from those requirements:
Task 2: Determine whether to place the database in an existing instance
A separate instance might be required because of memory isolation needs; different fault tolerance, authentication, or security concerns; or regulatory or support requirements.
Task 3: Determine whether to place the instance on an existing server running SQL Server or on a new server
Multiple instances on one server can be useful, but each additional instance adds overhead.
13. Step 3. Design the Database Infrastructure (continued)
The database requirements will be gathered and the database infrastructure will be designed from those requirements:
Task 4: Determine the number of servers after addressing scaling-out and fault-tolerance needs
Task 5: Determine placement of each new instance
Task 6: Select the server hardware
14. Step 4. Design the SQL Server Integration Services Infrastructure
If it was determined that SQL Server Integration Services (SSIS) is required in the organization, the SSIS infrastructure will be designed in this step:
Task 1: Determine resource requirements
Task 2: Decide where the Integration Services packages will be stored
Task 3: Determine number of SSIS servers required
Task 4: Determine placement
15. Step 5. Design the SQL Server Analysis Services Infrastructure
If it was determined that SQL Server Analysis Services (SSAS) is required in the organization, the SSAS infrastructure will be designed in this step:
Task 1: Determine resource requirements
SSAS uses OLAP databases, or cubes, stored on the file system.
Processing of OLAP databases is read- and write-intensive.
The product group recommends 4–8 GB of memory per processor core.
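The per-core memory guidance scattered through this deck can be collected into one small sizing aid. The ranges below are the ones stated in the deck and speaker notes (GB per processor core); the role keys are this sketch's own naming, not terms from the guide.

```python
# Per-core memory guidance from the deck, as a lookup (GB per core).
MEMORY_PER_CORE_GB = {
    "oltp": (2, 4),            # Database Engine, OLTP workloads
    "data_warehouse": (4, 8),  # Database Engine, data warehouse workloads
    "ssas": (4, 8),            # Analysis Services
    "ssrs": (2, 4),            # Reporting Services
}

def memory_range_gb(role, cores):
    """Return the (minimum, maximum) recommended memory for a server."""
    low, high = MEMORY_PER_CORE_GB[role]
    return low * cores, high * cores

print(memory_range_gb("ssas", 8))  # (32, 64) for an 8-core SSAS server
```

These are planning starting points; actual requirements should be validated against the workload.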
16. Step 5. Design the SQL Server Analysis Services Infrastructure (continued)
If it was determined that SQL Server Analysis Services (SSAS) is required in the organization, the SSAS infrastructure will be designed in this step:
Task 2: Determine SQL Server version
Task 3: Decide whether scalable shared databases will be used
Task 4: Determine scaling needs
Task 5: Decide whether to cluster
Task 6: Determine placement
17. Step 6. Design the SQL Server Reporting Services Infrastructure
If it was determined that SQL Server Reporting Services (SSRS) is required in the organization, the SSRS infrastructure will be designed in this step:
Task 1: Determine resource requirements, which depend on:
Disk storage needed for SSRS databases
Memory: 2–4 GB per processor core
Task 2: Determine placement of the report server databases
Databases can be hosted either on the Reporting Services server or on a remote database server.
18. Step 6. Design the SQL Server Reporting Services Infrastructure (continued)
If it was determined that SQL Server Reporting Services (SSRS) is required in the organization, the SSRS infrastructure will be designed in this step:
Task 3: Determine scaling and fault-tolerance approach
Load balancing for both scale-out and fault tolerance
Clustering for report server databases
Task 4: Determine placement of the SSRS server
19. Summary and Conclusion
This guide has outlined the process for planning the SQL Server 2008 infrastructure:
Choice of roles needed
Server resources
Scaling
Fault tolerance
Using the information recorded from the steps completed in the guide, the organization can help ensure that it meets the business and technical requirements for a successful SQL Server 2008 deployment.
Provide feedback to satfdbk@microsoft.com
20. Find More Information
Download the full document and other IPD guides: www.microsoft.com/ipd
Contact the IPD team: satfdbk@microsoft.com
Visit the Microsoft Solution Accelerators Web site: www.microsoft.com/technet/SolutionAccelerators
22. Appendix A: Additional Considerations
Different editions of SQL Server are available to meet the various needs of an organization.
What the organization wants to accomplish will determine which edition of SQL Server will be implemented to meet those needs.
26. Appendix E: Job Aid
Step 1:
Name of business applications
Step 2:
Names of relational or OLTP databases required
Names of data warehouse databases required
SSIS required?
SSAS required?
SSRS required?
(* requires Database Engine also)
Step 3:
Disk storage requirements
IOPS and throughput requirements
Database in new or existing instance?
Will instance reside on existing server running SQL Server or new server?
Number of servers required to support scale-out options, if selected
Number of servers required to support fault-tolerance option, if selected
Failover clustering protection required?
In which office or geographical location will instance be placed?
Will instance be on physical server or virtualized environment?
Number of CPUs required
Required architecture
Required processor speed
Amount of memory required
Disk subsystem configuration
Number of network adapters
Step 4:
Does each component of the system meet the capacity and performance requirements?
Source data type
Destination data type
SSIS server version
Name of SQL Server instance or file share where packages are stored
Database Engine required?
Number of SSIS servers
SSIS server: virtualized or physical environment?
SSIS: will the SQL Server role coexist with other SQL Server roles on the same server?
Step 5:
Disk storage space requirements
SSAS server version
Will scalable shared databases be used?
What are scaling needs? (scale up or scale out)
Will failover clustering be used?
SSAS server: virtualized or physical environment?
SSAS: will the SQL Server role coexist with other SQL Server roles on the same server?
Step 6:
Record database requirements (size of the ReportServer and ReportServerTempDB databases)
Databases hosted locally or on a remote database server?
Record which databases are hosted locally or on a remote server
Number of servers to support SSRS
SSRS: will the SQL Server role coexist with other SQL Server roles on the same server?
SSRS server: virtualized or physical environment?
Infrastructure Planning and Design (IPD) is a series of planning and design guides created to clarify and streamline the planning and design process for Microsoft® infrastructure technologies. Each guide in the series addresses a unique infrastructure technology or scenario. These guides include the following topics:
Defining the technical decision flow (flow chart) through the planning process
Describing the decisions to be made and the commonly available options to consider in making the decisions
Relating the decisions and options for the business in terms of cost, complexity, and other characteristics
Framing the decisions in terms of additional questions for the business to ensure a comprehensive understanding of the appropriate business landscape
The guides in this series are intended to complement and augment Microsoft product documentation.
The guide is designed to provide a consistent structure for addressing the decisions and activities that are most critical to the successful implementation of Microsoft® SQL Server® 2008 data management software. Each decision or activity is subdivided into four elements:
Background on the decision or activity, including context setting and general considerations.
Typical options or tasks to perform for the activity.
A reference section evaluating the tasks in terms of cost, complexity, and manageability.
Questions for the business that may have a significant impact on the decisions to be made.
The purpose of this presentation is to address the decisions and/or activities that need to occur in planning a Microsoft SQL Server 2008 implementation. The following slides will take an IT pro through the most critical design elements in a well-planned SQL Server 2008 design.
SQL Server 2008 includes four primary components:
Database Engine. The Database Engine is the core service for storing, processing, and securing data.
Integration Services. Microsoft SQL Server Integration Services (SSIS) is a platform for building data integration solutions from heterogeneous sources, including packages that provide extract, transform, and load (ETL) processing for data warehousing.
Analysis Services. Microsoft SQL Server Analysis Services (SSAS) supports OLAP (online analytical processing) and data mining functionalities. This allows a database administrator to design and create multidimensional structures that contain data aggregated from other data sources, such as relational databases.
Reporting Services. Microsoft SQL Server Reporting Services (SSRS) delivers enterprise reporting functionality for creating reports that gather content from a variety of data sources, publishing the reports in various formats, and centrally managing their security and subscriptions.
Out of Scope. This guide does not address the following:
In-place upgrades, where an older instance of SQL Server is upgraded to SQL Server 2008 on the existing server; therefore, no infrastructure changes occur.
Side-by-side upgrades, where a new instance of SQL Server 2008 is installed on the same server as an instance of SQL Server 2005, and then the data is moved, with no infrastructure changes occurring.
Developer, Express, Compact, and Evaluation editions of SQL Server. The Developer edition has all the capabilities of the Enterprise edition but is not licensed for any form of production use. The Express edition is an entry-level database for learning and ISV redistribution. The Compact edition is an embedded database for developing desktop and mobile applications. The Evaluation edition is used for evaluating SQL Server.
SharePoint integration. Reporting Services can integrate with Microsoft Office SharePoint® Server 2007 to provide a user interface (UI) to administer, secure, manage, view, and deliver reports, but this integration will not be covered in this guide.
Database design, as it addresses the structure of the actual database.
This diagram is one possible example of a SQL Server 2008 architecture, for illustrative purposes only. To provide a perspective on how the different SQL Server components complement each other, a possible scenario (steps 1-5) is pictured in this figure and described below:
1. An application takes input from users and writes entries to an OLTP database as it occurs.
2. Periodically, SQL Server Integration Services (SSIS) extracts the data from the OLTP database and combines it with other data existing in the organization: perhaps another database system or some flat file exports from legacy systems. SSIS then transforms the data, standardizes the formats of the data (for example, state names versus abbreviations), and then loads the data into another database, in this case a SQL Server data warehouse.
3. SQL Server Analysis Services (SSAS) is then used to extract data from the data warehouse and place it into OLAP cubes. Cubes allow for complex analytical and as-needed queries with a rapid execution time.
4. Managerial decision makers can use Microsoft Excel® spreadsheet software or other applications to perform data mining to detect and predict trends in the business.
5. SQL Server Reporting Services (SSRS) is used to publish reports to other users in the organization, for example, salespeople who need to know current production levels. These reports are generated either on demand or on a scheduled basis.
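The extract-transform-load flow in steps 1-3 above can be sketched in plain Python purely to illustrate the pattern. This is not SSIS itself, and the state-name mapping and sample row are hypothetical.

```python
# Conceptual ETL sketch: extract raw OLTP rows, standardize formats
# (state abbreviations to full names), and load into a warehouse store.

STATE_NAMES = {"CA": "California", "WA": "Washington"}  # assumed mapping

def extract(source_rows):
    """Pull raw entries as they arrived in the OLTP source."""
    return list(source_rows)

def transform(rows):
    """Standardize formats, e.g. state abbreviations to full names."""
    return [dict(row, state=STATE_NAMES.get(row["state"], row["state"]))
            for row in rows]

def load(rows, warehouse):
    """Append the cleaned rows to the warehouse store."""
    warehouse.extend(rows)

warehouse = []
load(transform(extract([{"sale": 100, "state": "CA"}])), warehouse)
print(warehouse)  # [{'sale': 100, 'state': 'California'}]
```

In the real architecture, each stage would read from and write to database connections rather than in-memory lists.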
This is the SQL Server 2008 design flow. The details of the design will be discussed in the following slides.
Step 1. Determine Project Scope
The project scope will be determined in order to align the goals of the project with the business motivation.
Task 1: Determine applications in scope
The results of this step will be used to determine which SQL Server roles will be required. Understanding the needs of the business provides a design that meets the business requirements. The database administrator (DBA) can serve as a major influence for the design team given the DBA's working knowledge of the SQL Server features.
There are two primary architecture approaches for designing a SQL Server implementation: application-specific or as a service.
In the application-specific approach, the infrastructure is designed to support a specific application. This requires application requirements gathering, determining required SQL Server roles, and specific application server design optimization. This is the approach used in the guide.
In the second approach, consider the SQL Server needs of many applications across the enterprise and design SQL Server as a service. The SQL Server as a service approach can be made available as a general platform for the business units to deliver standardization and economies of scale. The IT department can determine the SQL Server needs of specific applications based on data throughput, storage, memory, fault tolerance, security, and availability requirements. Then standardized hardware is purchased, and the IT department can balance incoming requests against existing capacity and performance and allocate the appropriate resources. This guide is not written for this approach; however, all of the tasks will be applicable, so it may provide a starting point. With this approach, several steps in this guide may need to be repeated as each role is designed.
Step 2. Determine Which Roles Will Be Required
In this step, the product roles required will be identified.
Task 1: Determine if Database Engine will be required
The Database Engine is the core service for storing, processing, and securing data. It supports OLTP databases. If the application being implemented requires one or more OLTP databases, record the database names in Table A-1 in the Appendix in the guide. Other services that may be selected in later tasks could determine the need for the Database Engine; Reporting Services requires access to an OLTP database server to store metadata, and Integration Services can store packages in the msdb database or on the file system.
Task 2: Determine if Integration Services will be required
SQL Server Integration Services (SSIS) can connect to a variety of data sources to extract data, transform data to compatible formats, merge it into one dataset, and load it into one or more destinations, including flat files, raw files, and relational databases. See Appendix A in this presentation deck for one SSIS configuration. SSIS may be required if the organization needs to:
Merge data from heterogeneous data sources.
Populate data warehouses and data marts.
Cleanse and standardize data.
Automate administrative functions and data loading.
Task 3: Determine if Analysis Services will be required
SQL Server Analysis Services (SSAS) supports OLAP (online analytical processing) and data mining functionalities. This allows a DBA to design and create multidimensional structures that contain data aggregated from other data sources, such as relational databases. SSAS may be needed if reports need to be accessed rapidly with varying degrees of granularity (for example, yearly totals, monthly totals, quarterly totals, and individual transactions).
See Appendix B in this presentation deck for one SSAS configuration.
Task 4: Determine if Reporting Services will be required
SQL Server Reporting Services (SSRS) delivers enterprise reporting functionality for creating reports that gather content from a variety of data sources, publishing the reports in various formats, and centrally managing their security and subscriptions. SSRS can be used to generate reports on OLTP databases, SSAS cubes, data warehouses, data marts, or third-party data sources such as flat files, Oracle databases, or Web services. See Appendix C in this deck for one SSRS configuration.
Step 3. Design the Database Infrastructure
In this step, the database requirements will be gathered and the database infrastructure will be designed from those requirements.
Task 1: Determine capacity and performance requirements
Disk storage required. For databases that don't yet exist, an estimate will need to be made of the disk storage required. Storage needs should be calculated for the database, transaction log, indexes, and tempdb database.
IOPS and throughput required. Since the main function of SQL Server is to manipulate data, and that data resides either in memory or on the I/O subsystem, any I/O performance problems will result in performance degradation of SQL Server. Although it may not be possible to calculate the required IOPS in advance, benchmarks for some workloads may be available from SAN and disk vendors that may provide a baseline for estimating the required performance and the disk storage configuration required to deliver that performance level.
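As a back-of-envelope illustration of turning an IOPS requirement into a disk count, the sketch below applies a common RAID sizing rule of thumb. The per-disk IOPS figure and the RAID 10 write penalty of 2 are assumptions for the example, not numbers from the guide; vendor benchmarks should be used where available.

```python
# Illustrative spindle-count estimate for a required front-end IOPS load.
import math

def disks_needed(required_iops, write_fraction, per_disk_iops=150,
                 raid_write_penalty=2):
    """Estimate spindle count: each front-end write costs extra
    back-end I/Os under RAID (a penalty of 2 is typical for RAID 10)."""
    reads = required_iops * (1 - write_fraction)
    writes = required_iops * write_fraction * raid_write_penalty
    return math.ceil((reads + writes) / per_disk_iops)

print(disks_needed(3000, 0.3))  # 3,000 front-end IOPS, 30% writes
```

The same arithmetic extends to other RAID levels by changing the write penalty (for example, 4 for RAID 5).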
Step 3. Design the Database Infrastructure (continued)

Task 2: Determine whether to place the database in an existing instance
Regulatory requirements, memory isolation, fault tolerance, authentication, security concerns, and support requirements are all potential reasons why a database might not be able to be located within an existing instance.

Task 3: Determine whether to place the instance on an existing server running SQL Server or on a new server
Running multiple instances on a server is ideal for supporting multiple test and development environments, hosting multiple applications on one server while keeping each application in its own instance, or securely isolating databases; however, each additional instance adds overhead.
Step 3. Design the Database Infrastructure (continued)

Task 4: Determine the number of servers
Several factors can change the number of SQL Server-based servers: whether scaling out will be implemented, and whether fault tolerance will be implemented at the database level or the instance level.

Task 5: Determine placement of each new instance
For each instance identified in the previous tasks, determine the location of the instance, whether it will run on a physical or virtual server, and whether it will run on existing or new hardware.

Task 6: Select the server hardware
Little specific architectural information is available for sizing these servers, but hardware vendors provide sizing calculators to help design server hardware; see the "Additional Reading" section for links to these tools. In addition to using a calculator, consider the following:
- CPU. Many variables can affect CPU utilization. Use 64-bit processors, and prefer multi-core and multiple-processor configurations; doubling the number of CPUs is preferred, but it does not guarantee twice the performance. Larger L2 or L3 processor caches generally provide better performance and often play a bigger role than raw CPU frequency.
- Memory. For OLTP applications, the product group recommends 2–4 GB of memory per processor core; for data warehouses, 4–8 GB per processor core.
- Disk subsystem. Design the disk subsystem to provide the required storage space, deliver the performance to support the required number of IOPS, and protect against hardware failures. If a SAN is used, this may include redundant HBAs and switches.
- Network adapters. Some hardware manufacturers provide network adapters and drivers that offer fault tolerance or network port teaming for increased throughput.
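The per-core memory guidance in Task 6 can be turned into a quick range calculation. The figures (2–4 GB per core for OLTP, 4–8 GB per core for data warehouses) come from the text above; the function itself is our own illustration.

```python
# Per-core memory guidance quoted in the text (GB per processor core).
GUIDELINES_GB_PER_CORE = {
    "oltp": (2, 4),  # OLTP applications
    "dw": (4, 8),    # data warehouses
}

def memory_range_gb(workload, cores):
    """Return the (low, high) recommended memory range for a server."""
    low, high = GUIDELINES_GB_PER_CORE[workload]
    return cores * low, cores * high

# A hypothetical 16-core OLTP server:
print(memory_range_gb("oltp", 16))  # (32, 64)
```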
Step 4. Design the SQL Server Integration Services Infrastructure

The SSIS infrastructure will be designed in this step. If it was determined that SSIS is not required, go to the next step.

Task 1: Determine resource requirements
Data cannot be transformed faster than it can be read or written, and no calculator is available for determining capacity and performance for the SSIS role, so this task presents several items for consideration:
- SSIS takes data from one source and moves it to another, so the areas to consider include the data source, the network from the data source to the SSIS server, the SSIS server itself, the network to the destination, and the destination server.
- I/O and storage needs for SSIS itself are minimal; however, the sources and destinations will be affected as data is read and written, respectively.
- A recommended range of memory is 2–8 GB per processor core, with an average of 4 GB per processor core.
- The total volume of data should be calculated as accurately as the organization is able, and an estimate made of whether the network and I/O subsystems will perform as required.
- SQL Server comes in 32-bit and 64-bit versions. Depending on the Open Database Connectivity (ODBC) drivers available for the source and destination, a particular version may be required.

Task 2: Determine where the Integration Services packages will be stored
An Integration Services package is the set of instructions that is retrieved and executed against the data. Packages can be stored in the SQL Server msdb database or on the file system.

Task 3: Determine the number of SSIS servers required
SSIS does not support clustering or provide automatic load balancing for scaling out.
If the load exceeds the capacity of a single server to meet the business requirements, add servers to perform parallel loads or processing and manually split the SSIS tasks among them.

Task 4: Determine placement
Network throughput is important for the SSIS role because it moves data from one system to another. Factors such as politics, policy, network constraints, proximity to data sources, and geography may influence where the SSIS server is placed. SSIS can be run in a virtualized environment, and the SSIS role can be installed on an existing server.
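Because SSIS throughput is bounded by the network and I/O paths described above, a back-of-the-envelope check of whether a batch window can absorb the data volume is often useful. The link speed, volume, and efficiency factor below are assumed examples.

```python
# Estimate wall-clock hours to move a data volume over a network link.
# The efficiency factor is an assumed fraction of raw bandwidth that is
# usable in practice (protocol overhead, contention, etc.).

def transfer_hours(data_gb, link_mbps, efficiency=0.6):
    bits = data_gb * 8 * 1024**3          # payload in bits
    usable_bps = link_mbps * 1_000_000 * efficiency
    return bits / usable_bps / 3600

# Hypothetical nightly load: 500 GB over a 1 Gbps link.
hours = transfer_hours(data_gb=500, link_mbps=1000, efficiency=0.6)
print(f"{hours:.1f} h")
```

If the result does not fit the batch window, the options in Task 3 (parallel loads split across additional SSIS servers) or a faster network path would need to be considered.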
Step 5. Design the SQL Server Analysis Services Infrastructure

If it was determined that SQL Server Analysis Services (SSAS) is required in the organization, the SSAS infrastructure will be designed in this step. If it was determined that SSAS is not required, go to the next step.

Task 1: Determine resource requirements
SSAS uses online analytical processing (OLAP) multidimensional databases, also called cubes, which are stored in a folder on the file system.
Disk storage required:
- Cube sizes depend on the size of the fact tables and the dimension members. If no other data is available, a good starting point is to allocate approximately 20–30 percent of the space required for the same data stored in the underlying relational database, if that is where the data originates.
- Aggregations are typically less than 10 percent of the size of the data stored in the underlying relational database, but they can vary with the number of aggregations.
- During processing, SSAS stores copies of the objects in the processing transaction on disk until it is finished, and the processed copies then replace the original objects. Sufficient additional disk space must therefore be provided for a second copy of each object.
Memory and processor:
- The recommendation is 4–8 GB of memory per processor core.
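The disk rules of thumb above (cube at roughly 20–30 percent of relational size, aggregations under 10 percent, and a second copy during processing) can be combined into a simple estimate. The percentages come from the text; the 200 GB relational size is an assumed example.

```python
# Apply the SSAS disk sizing rules of thumb quoted in the text.

def ssas_disk_estimate_gb(relational_gb, cube_frac=0.25, agg_frac=0.10):
    cube = relational_gb * cube_frac       # cube ~20-30% of relational size
    aggs = relational_gb * agg_frac        # aggregations < ~10%
    steady_state = cube + aggs
    # Processing keeps a second copy of the objects until the
    # processing transaction completes, so plan for double at peak.
    peak_during_processing = steady_state * 2
    return steady_state, peak_during_processing

# Hypothetical 200 GB relational source:
print(ssas_disk_estimate_gb(200))  # (70.0, 140.0)
```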
Step 5. Design the SQL Server Analysis Services Infrastructure (continued)

Task 2: Determine the SQL Server version
SQL Server comes in 32-bit and 64-bit versions. Depending on the ODBC drivers available for the source and destination, a particular version may be required.

Task 3: Decide whether scalable shared databases will be used
Scalable shared databases can be used to scale out querying or reporting loads. With scalable shared databases, a read-only copy of the database on a shared drive is shared among several SSAS servers. The benefits include more processors and more memory available to process queries; however, proper disk configuration is important for the disk containing the database.

Task 4: Determine scaling needs
If cube processing affects query performance, or if processing cannot occur during times of reduced query load, consider moving processing tasks to a staging server and then performing an online synchronization between the staging server and the production server. Processing can also be distributed across multiple instances of Analysis Services by using remote partitions; processing remote partitions uses the processor and memory resources of the remote server instead of those of the local computer.

Task 5: Decide whether to cluster
SSAS natively supports failover clusters (formerly known as server clusters, or MSCS) in Windows Server® to maximize availability.

Task 6: Determine placement
Factors such as politics, policy, network constraints, proximity to data sources, and geography can determine where to place the SSAS servers. SSAS can be run in a virtualized environment if the memory, disk, and network requirements do not exceed the throughput capabilities of the virtual machine. The SSAS role can also coexist with other SQL Server roles on the same server.
Step 6. Design the SQL Server Reporting Services Infrastructure

If it was determined that SQL Server Reporting Services (SSRS) is required in the organization, the SSRS infrastructure will be designed in this step. If it was determined that SSRS is not required, the SQL Server infrastructure planning is complete.

Task 1: Determine resource requirements
In this task, the performance requirements will be assessed. No hard guidance is available for determining the processor, disk performance, or networking requirements of a typical scenario, because every scenario is different, but several factors can be taken into consideration:
- Size and complexity of report definitions.
- Data source.
- Whether reports are executed from cached or snapshot data or from live data.
- Format requested when rendering a report; formats such as PDF or Excel are more resource-intensive than HTML.
Disk storage. A report server database, or catalog, provides internal storage for one or more report servers. Each SSRS server must connect to two databases, named ReportServer and ReportServerTempDB by default. The ReportServer database stores the report definitions, resources, and configurations, while the ReportServerTempDB database stores the temporary snapshots used while reports are running. See the guide for more information about the database sizes.
Memory requirements. The product group recommends 2–4 GB of memory per processor core.

Task 2: Determine placement of the report server databases
The report server databases can be hosted either on the Reporting Services server or on a remote database server.
Step 6. Design the SQL Server Reporting Services Infrastructure (continued)

Task 3: Determine the scaling and fault-tolerance approach
Scale-out deployments are used to provide fault tolerance and increase the scalability of report servers to handle additional concurrent users and larger report execution loads. They can also be used to dedicate specific servers to processing interactive or scheduled reports. Although moving to a scale-out configuration requires the Enterprise edition of Reporting Services, doing so has several advantages in addition to raw capacity:
- Multiple report servers offer better availability; if a single server fails, the overall system continues to answer queries.
- Additional machines can each take advantage of a dedicated memory address space, without having to move to 64-bit hardware.
While SSRS itself is not supported in an MSCS cluster configuration, the server hosting the report server database can be set up in a failover cluster configuration.

Task 4: Determine placement of the SSRS server
For smaller environments, the SSRS server can be implemented on the same system as other SQL Server services. SSRS is supported in a virtualized environment if the memory, disk, and network requirements do not exceed the throughput capabilities of the virtual machine.
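The availability benefit of multiple report servers in Task 3 can be quantified: assuming independently failing nodes, the scale-out keeps answering queries as long as at least one node is up. The 99 percent per-node availability figure is an assumed example.

```python
# Availability of a scale-out where any surviving node can serve queries,
# assuming independent node failures (a simplifying assumption).

def scale_out_availability(node_availability, nodes):
    # Probability that not all nodes are down at the same time.
    return 1 - (1 - node_availability) ** nodes

for n in (1, 2, 3):
    print(n, f"{scale_out_availability(0.99, n):.6f}")
```

Even two nodes at an assumed 99 percent each yield 99.99 percent for the farm, which is why scale-out is presented here as both a capacity and an availability measure.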
Summary and Conclusion

The guide has outlined a step-by-step process for planning a SQL Server 2008 infrastructure. In each step, the major decisions affecting the SQL Server infrastructure were identified and explained. The guide has shown how to record the chosen roles, server resources, scaling, and fault-tolerance approaches so that this information can be made available to the infrastructure planners. Using the information recorded in the steps completed in this guide, organizations can help ensure that they meet business and technical requirements for a successful SQL Server 2008 deployment.
Additional Considerations

Different editions of SQL Server are available to meet the various needs of an organization; what the organization wants to accomplish determines which edition should be implemented. The following provides an overview of the SQL Server editions available:
- Microsoft SQL Server 2008 Standard. A full-featured data platform for running small-scale to medium-scale online transaction processing (OLTP) applications and basic reporting and analytics.
- Microsoft SQL Server 2008 Enterprise. Provides additional features for scalability and performance, high availability (HA), security, data warehousing, and business intelligence (BI) tools, and supports a maximum of 50 instances.
- Microsoft SQL Server 2008 Workgroup. Provides secure remote synchronization and capabilities for running branch applications. It includes the core database features of the SQL Server product line and is easy to upgrade to Standard or Enterprise.
- Microsoft SQL Server 2008 Web. Specifically designed for highly available, Internet-facing Web-serving environments.
This figure provides a graphic overview of many of the possible inputs and outputs for SSIS in a SQL Server 2008 infrastructure.
This figure provides a graphic overview of inputs and outputs for SSAS in a SQL Server 2008 infrastructure.
This figure provides a graphic overview of inputs and outputs for SSRS in a SQL Server 2008 infrastructure.