This is the PASS DW/BI webinar on SQL Server Reporting Services (SSRS) Disaster Recovery. You can find the video at: http://www.youtube.com/watch?v=gfT9ETyLRlA
Introduction to Microsoft's Big Data Platform and Hadoop Primer - Denny Lee
This is my 24 Hours of SQL PASS (September 2012) presentation on Introduction to Microsoft's Big Data Platform and Hadoop Primer, also known as Project Isotope and HDInsight.
At the 2014 edition of Oracle OpenWorld, Oracle rolled out its new public cloud database service with its DBaaS offerings, but this is just one piece of each company's technological architecture. Businesses still need to build a private cloud and discover the driver for creating it; whether it is a measured service, consolidation, or rapid provisioning, finding this driver will be the initial building block. This presentation will give you insight into how a private cloud is architected, why the service catalog is its most important brick, and how to benefit from this upcoming era of databases.
Migrating Oracle Databases to Exadata requires careful preparation to simplify and optimize databases for best performance and availability. The document discusses key points:
1. Preparation is essential to remove unnecessary objects and optimize databases before migrating.
2. Different migration methods like transportable tablespaces, data pump, or GoldenGate have advantages depending on environment and goals.
3. A fast network reduces migration time, but other bottlenecks like source system I/O or small transfers must also be addressed.
DBaaS - The Next Generation of Database Infrastructure - Emiliano Fusaglia
Database as a Service (DBaaS) delivers database functionality as an on-demand cloud service, masking complexity. It offers flexible, scalable, secure databases with self-service provisioning and consolidated resources. DBaaS provides advantages over traditional databases like lower costs, faster provisioning, and increased efficiency through standardization and automation. DBaaS can be implemented through virtualization or using Oracle's Grid Infrastructure and multitenant database features which provide high availability, scalability, and performance isolation through resource management. DBaaS offers a standardized platform that can be engineered once and used for multiple applications in a pay-as-you-grow model.
SQL Server 2016 introduces new capabilities to help improve performance, security, and analytics:
- Operational analytics allows analytics queries to run concurrently with OLTP workloads against the same schema, with minimal impact on OLTP performance.
- In-Memory OLTP enhancements include greater Transact-SQL coverage, improved scaling, and tooling improvements.
- The new Query Store feature acts as a "flight data recorder" for databases, enabling quick performance issue identification and resolution (see the sketch after this list).
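As a rough illustration of Query Store (a minimal sketch only; the database name is hypothetical), turning the feature on and pulling its top CPU consumers looks like this in T-SQL:

    -- Enable Query Store on a user database (SalesDB is a placeholder name)
    ALTER DATABASE SalesDB SET QUERY_STORE = ON;
    ALTER DATABASE SalesDB SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

    -- Top 10 queries by total CPU time recorded by Query Store
    SELECT TOP 10
           qt.query_sql_text,
           SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time
    FROM sys.query_store_query_text AS qt
    JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
    JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
    JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
    GROUP BY qt.query_sql_text
    ORDER BY total_cpu_time DESC;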
Oracle 12c introduces a new multitenant architecture that allows multiple databases to share common resources in a container database (CDB). This lowers costs by reducing instance overhead, storage costs, and DBA resource costs. It also improves manageability through fast provisioning, easier patching and upgrades, and separation of duties. Key features include pluggable databases (PDBs) that can be moved easily, local and remote cloning of databases, and simpler, faster patching and upgrading. Flex ASM adds high availability by avoiding single points of failure and supports larger LUN sizes and more disk groups. ASM disk scrubbing provides automatic error correction using mirrored data. Rebalance operations now provide estimates beforehand, and the accuracy of those estimates has improved.
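To make the pluggable-database ideas concrete, here is a hedged Oracle SQL sketch (all names and paths are hypothetical) of provisioning a PDB from the seed and then cloning it locally:

    -- Create a new PDB from the seed; paths are illustrative
    CREATE PLUGGABLE DATABASE sales_pdb
      ADMIN USER pdb_admin IDENTIFIED BY "ChangeMe#1"
      FILE_NAME_CONVERT = ('/pdbseed/', '/sales_pdb/');

    ALTER PLUGGABLE DATABASE sales_pdb OPEN;

    -- Clone the PDB locally with one statement
    CREATE PLUGGABLE DATABASE sales_pdb_clone FROM sales_pdb
      FILE_NAME_CONVERT = ('/sales_pdb/', '/sales_pdb_clone/');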
This document summarizes updates to the EDB Postgres Platform for winter 2017, including:
- EDB Postgres Advanced Server 9.6 which adds features like Oracle-compatible advanced queuing and nested subprocedures to help migrate more applications from Oracle, manage larger datasets, and improve integration.
- Backup and Recovery 2.0 which enables faster backups using block-level incremental change capture.
- Replication Server 6.1 which adds support for Oracle 12c and SQL Server 2014, and allows parallel replication between multiple active nodes for improved performance.
This document discusses SQL Server 2019 and provides the following information:
1. It introduces Javier Villegas, a technical speaker and SQL Server expert.
2. It outlines several new capabilities in SQL Server 2019 including artificial intelligence, container support, and big data analytics capabilities using Apache Spark.
3. It compares editions and capabilities of SQL Server on Windows and Linux and notes they are largely the same.
Mining the AWR: Alternative Methods for Identification of the Top SQLs (inclu... - Maris Elsins
A typical tuning session on a resource-constrained system starts with a search for "low-hanging fruit." In a CPU-bound database system, it would be the SQL that uses CPU the most, in an I/O-bound system, the SQL doing the most physical reads, and so on. Tuning the TOP statements often allows us to free large portions of the utilized resources and remove bottlenecks. Often, we can use AWR reports to quickly identify the SQL_IDs of the top statements in the database. But what if the AWR report reveals no "low-hanging fruit," and the resource usage is evenly distributed among multiple statements? Where do we start? Is there a better way to identify the starting point for the tuning of a resource-bound system?
This presentation will explain when the AWR reports are misleading and how we can take a look at the data stored in AWR from a different angle to determine the top consumers. Discussion will include a practical demonstration using scripts for AWR mining that attendees can apply to their own challenging database performance tuning problems.
Scripts and the demo log: https://github.com/MarisElsins/TOOLS/tree/master/SQL/C15LV_AWR
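The linked scripts do the real mining; as a simplified illustration of the idea (a sketch assuming the Diagnostics Pack is licensed, not one of the author's scripts), CPU usage can be aggregated per SQL_ID straight from the AWR history views:

    -- Top 10 SQL by CPU time over the last day of AWR snapshots
    SELECT *
    FROM (
      SELECT s.sql_id,
             SUM(s.cpu_time_delta)   AS cpu_time,
             SUM(s.executions_delta) AS executions
      FROM   dba_hist_sqlstat s
             JOIN dba_hist_snapshot sn
               ON  sn.snap_id = s.snap_id
               AND sn.dbid = s.dbid
               AND sn.instance_number = s.instance_number
      WHERE  sn.begin_interval_time > SYSDATE - 1
      GROUP  BY s.sql_id
      ORDER  BY cpu_time DESC
    )
    WHERE ROWNUM <= 10;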
Microsoft SQL Server internals & architecture - Kevin Kline
From noted SQL Server expert and author Kevin Kline - Let’s face it. You can effectively do many IT jobs related to Microsoft SQL Server without knowing the internals of how SQL Server works. Many great developers, DBAs, and designers get their day-to-day work completed on time and with reasonable quality while never really knowing what’s happening behind the scenes. But if you want to take your skills to the next level, it’s critical to know SQL Server’s internal processes and architecture. This session will answer questions like:
- What are the various areas of memory inside of SQL Server?
- How are queries handled behind the scenes?
- What does SQL Server do with procedural code, like functions, procedures, and triggers?
- What happens during checkpoints? Lazywrites?
- How is I/O handled with regard to transaction logs and databases?
- What happens when transaction logs and databases grow or shrink?
This fast-paced session will take you through many aspects of the internal operations of SQL Server and, for those topics we don’t cover, will point you to resources where you can get more information.
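As a small taste of the "areas of memory" question, this T-SQL sketch (standard DMVs; output varies by instance) lists the largest memory clerks inside the process:

    -- Largest memory clerks inside the SQL Server process
    SELECT TOP 10
           type,
           SUM(pages_kb) / 1024 AS memory_mb
    FROM sys.dm_os_memory_clerks
    GROUP BY type
    ORDER BY memory_mb DESC;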
Azure SQL Database (SQL DB) is a database-as-a-service (DBaaS) that provides nearly full T-SQL compatibility so you can gain tons of benefits for new databases or by moving your existing databases to the cloud. Those benefits include provisioning in minutes, built-in high availability and disaster recovery, predictable performance levels, instant scaling, and reduced overhead. And gone will be the days of getting a call at 3am because of a hardware failure. If you want to make your life easier, this is the presentation for you.
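Provisioning and scaling in Azure SQL Database can be done in plain T-SQL; a minimal sketch, run against the logical server's master database (the database name and service objectives are illustrative):

    -- Create a database at a chosen service tier
    CREATE DATABASE SalesDb ( EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2' );

    -- Scale it up later with a single statement
    ALTER DATABASE SalesDb MODIFY ( SERVICE_OBJECTIVE = 'S3' );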
Attendees in this session will enhance their skills and job relevancy by gaining new knowledge and skills for using the Oracle Public Cloud within their job role, through actual use cases.
It will detail how backup to the cloud can be used to meet different needs of an organization and how to justify the use of new technology within the business. Learn how to create a storage container, set up secure OS authentication, and configure RMAN to use the Oracle Cloud. Perform a backup to the Oracle Cloud and recover from it back to your on-premises server. Learn how to migrate from an on-premises Oracle Database 12c to a pluggable Oracle Database 12c (PDB) in the Oracle Cloud. Then move a PDB in which developers have completed their work in the Oracle Cloud back on-premises and into production.
Microsoft SQL Server Distributing Data with R2 Bertucci - Mark Ginnebaugh
This presentation by Paul Bertucci describes an ordered method of determining what users need and which SQL Server data distribution solution is best to use.
There are many needs for data throughout an organization. Getting data to those who need it can be accomplished many different ways with SQL Server 2008 technologies.
This presentation covers data replication, database mirroring and snapshots, older methods such as log shipping and linked servers, and new methods such as using the sync framework.
You'll Learn
* Each of SQL Server’s main data distribution solutions
* How to determine which solution to use to solve different purposes
Temporal Tables, Transparent Archiving in DB2 for z/OS and IDAA - Cuneyt Goksu
The document discusses several data archiving solutions for z/OS systems including temporal tables, transparent archiving, and IDAA technology. Temporal tables allow querying and updating historical data using system time periods. Transparent archiving moves old data to other storage platforms while still allowing dynamic queries. IDAA provides accelerated query performance for temporal tables by routing queries to an accelerator system. The solutions can be combined for different use cases depending on data retention and access needs.
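A hedged DB2 SQL sketch of the system-time idea (table and column names are hypothetical; the LIKE-then-ADD-VERSIONING pattern follows the commonly documented approach):

    -- System-period temporal table: DB2 maintains the row-begin/row-end columns
    CREATE TABLE policy (
      policy_id INTEGER NOT NULL,
      premium   DECIMAL(10,2),
      sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
      sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
      trans_id  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
      PERIOD SYSTEM_TIME (sys_start, sys_end)
    );

    CREATE TABLE policy_hist LIKE policy;
    ALTER TABLE policy ADD VERSIONING USE HISTORY TABLE policy_hist;

    -- Query the table as of a point in the past
    SELECT * FROM policy FOR SYSTEM_TIME AS OF '2020-01-01-00.00.00';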
Oracle RAC 12c and Policy-Managed Databases, a Technical Overview - Ludovico Caldara
Oracle RAC Policy-Managed Database (PMD) is a powerful but so far rarely used feature introduced in Oracle Database 11g
Release 2 to automate the instance administration in a dynamic, multi-node cluster.
The aim of this presentation is to review how PMD works, how to implement and administer it successfully, and how to
benefit from this technology compared to the traditional administrator-managed deployment. During the session, the new
features of the 12c Grid Infrastructure related to PMD are highlighted.
If you are seeking ways to improve your cloud database environment with EDB Postgres, this presentation reviews how you can create a Database-as-a-Service (DBaaS) with EDB Postgres on AWS.
This presentation outlines how EDB Ark can play a key role in your digital transformation with more agility and speed.
It highlights:
● How EDB Ark can integrate with your existing AWS environment and other clouds
● How you can automate your database deployments to instantly spin up new databases
● How to manage your database environment more easily using the same GUI for all clouds
● How to boost developer efficiency and satisfaction
Whether your database is currently in the cloud or you are considering the cloud as an option, this presentation will provide you with the information you need to evaluate EDB Postgres and EDB Ark.
The recording of this presentation includes a demonstration. Visit www.edbpostgres.com > resources > webcasts
Fast, Flexible Application Development with Oracle Database Cloud Service - Gustavo Rene Antunez
Developing applications to run on the most important database manager in the world? Why not do it in the cloud? With Oracle Database Cloud Service, developers can quickly and easily access the power and flexibility of the Oracle database in the cloud. With a choice between an instance or a dedicated database with full administrative control, or a schema dedicated to a development platform with full deployment managed by Oracle, developers can decide how much control they have over their development environments. Attend this session to learn more about the features and benefits of Oracle Database Cloud.
Oracle RAC 12c (12.1.0.2) Operational Best Practices - A result of true colla... - Markus Michalewicz
This is the latest version of the Oracle RAC 12c (12.1.0.2) Operational Best Practices presentation as shown during IOUG / Collaborate15. As best practices are a result of true collaboration, this will probably be the last version before OOW 2015.
Boost your Oracle RAC manageability with Policy-Managed Databases - Ludovico Caldara
Oracle RAC Policy-Managed Database (PMD) is a powerful but so far rarely used feature introduced in Oracle Database 11g
Release 2 to automate the instance administration in a dynamic, multi-node cluster.
The aim of this presentation is to review how PMD works, how to implement and administer it successfully, and how to
benefit from this technology compared to the traditional administrator-managed deployment.
These slides are from the session I've done at Collaborate14, but re-branded with my company's template.
Microsoft SQL Server 2017 Level 300 technical deck - George Walters
This deck covers new features in SQL Server 2017, as well as carryover features from 2012 onwards. This includes high availability, columnstore, alwayson, In-memory tables, and other enterprise features.
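As one concrete taste of the columnstore and in-memory features such a deck covers, a hedged T-SQL sketch (object names are hypothetical; the memory-optimized table assumes a MEMORY_OPTIMIZED_DATA filegroup already exists):

    -- Clustered columnstore index for analytics-style scans
    CREATE TABLE dbo.SalesFact (OrderId INT, Amount MONEY, SoldAt DATETIME2);
    CREATE CLUSTERED COLUMNSTORE INDEX ccx_SalesFact ON dbo.SalesFact;

    -- Memory-optimized table for high-throughput OLTP
    CREATE TABLE dbo.SessionState (
      SessionId UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
      Payload   VARBINARY(MAX)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);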
The document discusses new features in Oracle Database 12c including the introduction of a multitenant architecture. Key points include:
- 12c introduces a multitenant architecture that allows a single database to host many pluggable databases (PDBs). This improves consolidation and resource utilization.
- PDBs can be quickly provisioned from seed databases or cloned from other PDBs. Common operations can be performed at the container database level.
- Adaptive execution plans allow queries to dynamically switch plans at runtime if optimizer estimates prove inaccurate based on statistics collected during execution (see the sketch after this list).
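For the adaptive-plan point, a hedged Oracle SQL sketch of how one might spot statements whose plans adapted at run time (the sql_id is a placeholder):

    -- Cursors whose final plan was resolved adaptively
    SELECT sql_id, child_number, is_resolved_adaptive_plan
    FROM   v$sql
    WHERE  is_resolved_adaptive_plan = 'Y';

    -- Show the full adaptive plan for one cursor
    SELECT * FROM TABLE(
      DBMS_XPLAN.DISPLAY_CURSOR('abcd1234efgh5', NULL, '+ADAPTIVE'));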
This document discusses different approaches to converting a non-RAC Oracle database to a Real Application Clusters (RAC) configuration for high availability and scalability. It describes using Database Configuration Assistant (DBCA), RCONFIG, Oracle Enterprise Manager, and RMAN to perform the conversion. Best practices are provided such as using Automatic Storage Management (ASM) for shared storage, testing changes in a test environment first, and configuring redundant network interfaces. Backing up RAC databases with RMAN is also covered, highlighting features like automatic restore of control files and incremental backups.
This document discusses upgrading to Oracle Database 19c and migrating to Oracle Multitenant. It provides an overview of key features such as being able to have 3 user-created PDBs without a Multitenant license in 19c. It also demonstrates how to use AutoUpgrade to perform an upgrade and migration to Multitenant with a single command. The document highlights various Multitenant concepts such as resource sharing, connecting to containers, and cloning PDBs.
January 2015 HUG: Using HBase Co-Processors to Build a Distributed, Transacti... - Yahoo Developer Network
Monte Zweben, Co-Founder and CEO of Splice Machine, will discuss how to use HBase co-processors to build an ANSI-99 SQL database with 1) parallelization of SQL execution plans, 2) ACID transactions with snapshot isolation and 3) consistent secondary indexing.
Transactions are critical in traditional RDBMSs because they ensure reliable updates across multiple rows and tables. Most operational applications require transactions, but even analytics systems use transactions to reliably update secondary indexes after a record insert or update.
In the Hadoop ecosystem, HBase is a key-value store with real-time updates, but it does not have multi-row, multi-table transactions, secondary indexes or a robust query language like SQL. Combining SQL with a full transactional model over HBase opens a whole new set of OLTP and OLAP use cases for Hadoop that were traditionally reserved for RDBMSs like MySQL or Oracle. However, a transactional HBase system has the advantage of scaling out with commodity servers, leading to a 5x-10x cost savings over traditional databases like MySQL or Oracle.
HBase co-processors, introduced in release 0.92, provide a flexible and high-performance framework to extend HBase. In this talk, we show how we used HBase co-processors to support a full ANSI SQL RDBMS without modifying the core HBase source. We will discuss how endpoint transactions are used to serialize SQL execution plans over to regions so that computation is local to where the data is stored. Additionally, we will show how observer co-processors simultaneously support both transactions and secondary indexing.
The talk will also discuss how Splice Machine extended the work of Google Percolator, Yahoo Labs’ OMID, and the University of Waterloo on distributed snapshot isolation for transactions. Lastly, performance benchmarks will be provided, including full TPC-C and TPC-H results that show how Hadoop/HBase can be a replacement of traditional RDBMS solutions.
Delivering Pluggable Database as a Service - Pete Sharman
This document discusses Oracle Enterprise Manager 12c and its capabilities for providing database as a service (DBaaS). It describes DBaaS architectures like virtual machines, dedicated databases, and pluggable databases. It also discusses concepts like zones, pools, and service templates that allow flexible provisioning of database and middleware infrastructure in private and public clouds. Several use cases are provided to illustrate how DBaaS can be implemented using these concepts to meet the needs of different organizations and applications.
SQL Server Reporting Services Disaster Recovery webinar - Denny Lee
This is the PASS DW|BI virtual chapter webinar on SQL Server Reporting Services Disaster Recovery with Ayad Shammout and myself - hosted by Julie Koesmarno (@mssqlgirl)
Building the Perfect SharePoint 2010 Farm - MS Days Bulgaria 2012 - Michael Noel
This document discusses best practices for building a highly available and optimized SharePoint 2010 farm. It covers farm architecture including recommended server roles and sizing. It also discusses virtualization options and performance monitoring considerations. The document outlines strategies for data management including content database distribution, remote BLOB storage, SQL database optimization, and maintenance plans. Finally, it compares high availability and disaster recovery options for SQL Server like AlwaysOn availability groups and failover clustering.
The document discusses several high availability and disaster recovery options for SQL Server including failover clustering, database mirroring, log shipping, and replication. It provides examples of how different companies have implemented these technologies depending on their requirements. Key factors that influence architecture choices are downtime tolerance, deployment of technologies, and operational procedures. The document also covers SQL Server upgrade processes and how to move databases to a new datacenter while maintaining high availability.
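Of the options discussed, an availability group is the one most often scripted directly; a heavily simplified T-SQL sketch (server, endpoint, and database names are hypothetical, and prerequisites such as the WSFC cluster and mirroring endpoints are assumed to exist):

    -- Minimal two-replica availability group (prerequisites omitted)
    CREATE AVAILABILITY GROUP AG_Sales
    FOR DATABASE SalesDB
    REPLICA ON
      N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://sqlnode1.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
      N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://sqlnode2.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);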
Building and Deploying Large Scale SSRS using Lessons Learned from Customer D... - Denny Lee
This document discusses lessons learned from deploying large scale SQL Server Reporting Services (SSRS) environments based on customer scenarios. It covers the key aspects of success, scaling out the architecture, performance optimization, and troubleshooting. Scaling out involves moving report catalogs to dedicated servers and using a scale out deployment architecture. Performance is optimized through configurations like disabling report history and tuning memory settings. Troubleshooting utilizes logs, monitoring, and diagnosing issues like out of memory errors.
Getting Started with Managed Database Services on AWS - September 2016 Webina... - Amazon Web Services
On AWS you can choose from a variety of managed database services that save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We'll explain the fundamentals of Amazon RDS, a managed relational database service in the cloud; Amazon DynamoDB, a fully managed NoSQL database service; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started.
Learning Objectives:
• Overview of managed database services available on AWS
• How to combine them for high-performance cost effective architectures
• Learn how to choose between the AWS database services based on the use case
Who Should Attend:
• IT Managers, DBAs, Enterprise and Solution Architects, DevOps Engineers, and Developers
Selecting the Right AWS Database Solution - AWS 2017 Online Tech Talks - Amazon Web Services
• Get an overview of managed database services available on AWS
• Learn how to combine them for high-performance cost effective architectures
• Learn how to choose between the AWS database services based on your use case
On AWS you can choose from a variety of managed database services that save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We'll explain the fundamentals of Amazon RDS, a managed relational database service in the cloud; Amazon DynamoDB, a fully managed NoSQL database service; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be economical. We will cover how each service might help support your application and how to get started.
AWS June Webinar Series - Getting Started: Amazon Redshift - Amazon Web Services
Amazon Redshift is a fast, fully-managed petabyte-scale data warehouse service, for less than $1,000 per TB per year. In this presentation, you'll get an overview of Amazon Redshift, including how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. Learn how, with just a few clicks in the AWS Management Console, you can set up a fully functional data warehouse, ready to accept data without learning any new languages and easily plugging in the existing business intelligence tools and applications you use today. This webinar is ideal for anyone looking to gain deeper insight into their data, without the usual challenges of time, cost and effort.
In this webinar, you will learn how to:
• Understand what Amazon Redshift is and how it works
• Create a data warehouse interactively through the AWS Management Console
• Load some data into your new Amazon Redshift data warehouse from S3
Who Should Attend
• IT professionals, developers, line-of-business managers
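The "load some data from S3" step typically boils down to a COPY statement; a hedged Redshift SQL sketch (bucket, table, and IAM role names are hypothetical):

    -- Columnar table with distribution and sort keys
    CREATE TABLE events (
      event_id BIGINT,
      user_id  BIGINT,
      occurred TIMESTAMP
    )
    DISTKEY (user_id)
    SORTKEY (occurred);

    -- Bulk-load from S3 in parallel across the cluster
    COPY events
    FROM 's3://my-bucket/events/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS CSV;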
- Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service in the cloud. It uses massively parallel processing and columnar storage to enable fast queries on large data sets for a fraction of the cost of traditional data warehousing.
- Some key features include automatic scaling, continuous backups, integrated security and access controls, integration with other AWS services like S3 and DynamoDB, and simple point-and-click management.
- Customers are seeing significant improvements in performance, often 50-100x faster than alternatives like Hive, as well as large cost reductions of up to 80% compared to on-premises data warehousing.
AWS offers a wide variety of database services that adapt to your application's requirements. The database services are fully managed and can be deployed in minutes with just a few clicks. AWS services include Amazon Relational Database Service (Amazon RDS), compatible with 6 common database engines; Amazon Aurora, a MySQL-compatible relational database with up to 5x the performance; Amazon DynamoDB, a fast and flexible NoSQL database service; Amazon Redshift, a petabyte-scale data warehouse; and Amazon ElastiCache, an in-memory cache service compatible with Memcached and Redis. AWS also provides AWS Database Migration Service, a service that lets you migrate databases to the AWS cloud simply and cost-effectively.
Deep Dive on MySQL Databases on AWS - AWS Online Tech TalksAmazon Web Services
RDS provides fully managed MySQL, MariaDB, and Aurora database engines. It handles common database tasks to reduce management overhead and allows focusing on applications. Key features include automatic failover, backups/snapshots, scaling, security, compliance support, and integration across AWS services. Best practices involve leveraging multi-AZ, read replicas, monitoring, and storage optimization based on workload needs. Migration options include the Database Migration Service and Schema Conversion Tool.
This document provides an overview and use cases for Amazon Redshift, a fast, fully managed, petabyte-scale data warehouse service from Amazon Web Services. It summarizes Redshift's features including columnar storage, data compression, and massively parallel query processing. It also provides examples of how Redshift is used by companies to reduce costs, improve query performance, and scale their data warehousing needs. Specific use cases and customers of Redshift are highlighted.
Maximizing performance via tuning and optimization - MariaDB plc
Maximizing Performance via Tuning and Optimization outlines best practices for optimizing MariaDB server performance. It discusses:
- Defining service level agreements and metrics to monitor against them
- When to tune based on schema, query, or system changes
- Ensuring server, storage, network and OS settings support database needs
- Configuring connection pooling and threads to manage load
- Common MariaDB configuration settings that impact performance
- Query tuning techniques like indexing, monitoring tools, and database design (see the sketch below)
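A hedged MariaDB SQL sketch of the kinds of settings and checks the list above refers to (values and names are illustrative):

    -- Size the InnoDB buffer pool to the working set (dynamic in MariaDB 10.2+)
    SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;

    -- Capture slow queries for later analysis
    SET GLOBAL slow_query_log = ON;
    SET GLOBAL long_query_time = 1;

    -- Verify the plan before and after adding an index
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    ALTER TABLE orders ADD INDEX idx_orders_customer (customer_id);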
Maximizing performance via tuning and optimization - MariaDB plc
Maximizing performance via tuning and optimization involves:
- Defining service level agreements and translating them to database transactions.
- Capturing metrics on business, application, and database transactions to identify bottlenecks.
- Tuning from the start and periodically reviewing production systems for changes.
- Optimizing server, storage, network and OS settings as well as MariaDB configuration settings like buffer pool size, query cache size, and connection settings.
- Analyzing slow queries, indexing appropriately, and monitoring tools like Performance Schema.
- Designing databases and choosing optimal data types.
Ultimate SharePoint Infrastructure Best Practises Session - Isle of Man Share... - Michael Noel
This document summarizes best practices for SharePoint infrastructure design presented by Michael Noel. It discusses small, medium, and large farm models with separate web, app, and database servers. Hybrid cloud scenarios including one-way and two-way topologies are presented. Ensuring high availability through techniques like SQL AlwaysOn, database mirroring, and network load balancing is also covered. The presentation concludes with discussions of security best practices, documentation, and virtualization performance monitoring.
This document summarizes a presentation by Kevin Kline on strategies for addressing common SQL Server challenges. The presentation covered topics such as tuning disk I/O, managing very large databases, and an overview of Quest software solutions for SQL Server monitoring and performance. Key points included strategies for tiered storage, partitioning very large databases, monitoring disk queue lengths and page reads/writes in SQL Server.
Amazon Redshift is a managed service that gives you a data warehouse that is ready to use. You worry about loading and using your data; the infrastructure details (servers, replication, backups) are managed by AWS.
How to Set Up ApsaraDB for RDS on Alibaba Cloud - Alibaba Cloud
RDS is Alibaba Cloud's relational database service that provides a managed database service. It offers high availability, high performance, and scalability. Key benefits include usability through easy deployment and management, security through features like IP whitelisting and SQL attack protection, and availability through an architecture with primary and standby instances in different zones for failover. RDS instances can be easily scaled up or down and offer backups, read replicas, and temporary instances for recovery.
Amazon Web Services - Relational Database Service Meetup - cyrilkhairallah
The document discusses Amazon Relational Database Service (RDS), a managed database service. It provides an overview of RDS and how it can be used to deploy, operate, and scale databases in the cloud more easily without manual administration. Key topics covered include how to scale databases with RDS, optimize costs using reserved instances, monitor databases with CloudWatch, take automated backups, and perform other administrative tasks without managing the underlying infrastructure.
In this presentation, you will get a look under the covers of Amazon Redshift, a fast, fully-managed, petabyte-scale data warehouse service for less than $1,000 per TB per year. Learn how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. We'll also walk through techniques for optimizing performance and, you’ll hear from a specific customer and their use case to take advantage of fast performance on enormous datasets leveraging economies of scale on the AWS platform.
Similar to SQL Server Reporting Services Disaster Recovery Webinar (20)
Azure Cosmos DB: Globally Distributed Multi-Model Database Service - Denny Lee
Azure Cosmos DB is the industry's first globally distributed multi-model database service. Features of Cosmos DB include turn-key global distribution, elastic throughput and storage, multiple consistency models, and financially backed SLAs. Additionally, Table, Graph, and the Spark Connector for Cosmos DB are in preview. Healthcare scenarios are also included!
Denny Lee introduced Azure DocumentDB, a fully managed NoSQL database service. DocumentDB provides elastic scaling of throughput and storage, global distribution with low latency reads and writes, and supports querying JSON documents with SQL and JavaScript. Common scenarios that benefit from DocumentDB include storing product catalogs, user profiles, sensor telemetry, and social graphs due to its ability to handle hierarchical and de-normalized data at massive scale.
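DocumentDB's SQL dialect queries JSON documents directly; a hedged sketch (the container alias follows the common convention, and the properties are hypothetical):

    -- SQL over JSON: filter and project user-profile documents
    SELECT c.id, c.profile.displayName
    FROM c
    WHERE c.type = "userProfile" AND c.profile.country = "US"
    ORDER BY c._ts DESC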
SQL Server Integration Services Best Practices - Denny Lee
This is the presentation Thomas Kejser and I gave at the Microsoft Business Intelligence Conference 2008 (October 2008) on SQL Server Integration Services Best Practices.
SQL Server Reporting Services: IT Best Practices - Denny Lee
This is the presentation Lukasz Pawlowski and I gave at the Microsoft Business Intelligence Conference 2008 (October 2008) on SQL Server Reporting Services: IT Best Practices.
Differential Privacy Case Studies (CMU-MSR Mindswap on Privacy 2007) - Denny Lee
This document discusses case studies using differential privacy to analyze sensitive data. It describes analyzing Windows Live user data to study web analytics and customer churn. Clinical researchers' perspectives on differential privacy were also examined. Researchers wanted unaffected statistics and the ability to access original data if needed. Future collaboration with OHSU aims to develop a healthcare template for applying differential privacy.
Designing, Building, and Maintaining Large Cubes using Lessons Learned - Denny Lee
This is the presentation Nicholas Dritsas, Eric Jacobsen, and I gave at the 2007 SQL PASS Summit on designing, building, and maintaining large Analysis Services cubes.
SQLCAT: A Preview to PowerPivot Server Best Practices - Denny Lee
The document discusses SQL Server Customer Advisory Team (SQLCAT) and their work on the largest and most complex SQL Server projects worldwide. It also discusses SQLCAT's sharing of technical content and driving of product requirements back into SQL Server based on customer needs. The document promotes an upcoming SQL Server Clinic where experts will be available to answer questions about architecting and designing future applications.
SQLCAT: Tier-1 BI in the World of Big Data - Denny Lee
This document summarizes a presentation on tier-1 business intelligence (BI) in the world of big data. The presentation will cover Microsoft's BI capabilities at large scales, big data workloads from Yahoo and investment banks, Hadoop and the MapReduce framework, and extracting data out of big data systems into BI tools. It also shares a case study on Yahoo's advertising analytics platform that processes billions of rows daily from terabytes of data.
Jump Start into Apache Spark (Seattle Spark Meetup) - Denny Lee
Denny Lee, Technology Evangelist with Databricks, will demonstrate how easily many Data Sciences and Big Data (and many not-so-Big Data) scenarios easily using Apache Spark. This introductory level jump start will focus on user scenarios; it will be demo heavy and slide light!
How Concur uses Big Data to get you to Tableau Conference On Time - Denny Lee
This is my presentation from Tableau Conference #Data14 as the Cloudera Customer Showcase - How Concur uses Big Data to get you to Tableau Conference On Time. We discuss Hadoop, Hive, Impala, and Spark within the context of Consolidation, Visualization, Insight, and Recommendation.
This is an excerpt from "Tier-1 BI in the World of Big Data" by Thomas Kejser, Denny Lee, and Kenneth Lieu, specific to the Yahoo! TAO Case Study published at: http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=710000001707
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf - Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Project Management Semester Long Project - Acuity - jpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Generating privacy-protected synthetic data using Secludy and Milvus - Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT and schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating, explaining, or refactoring code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API.
What do a Lego brick and the XZ backdoor have in common?Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that they are both building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the case of the XZ backdoor share much more than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: Advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private organizations. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (which is where her nickname deneb_alpha comes from).
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the CCB and CCX license model have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We'll explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in for shared mailboxes. We'll show you such cases and their solutions. And of course we'll explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low going forward.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
SQL Server Reporting Services Disaster Recovery Webinar
1. SSRS Disaster Recovery
PASS DW|BI Webinar
Ayad Shammout (@aashammout) and Denny Lee (@dennylee)
Hosted by Julie Koesmarno (@mssqlgirl)
2. Agenda
• Review of Scale Out Architectures
• It’s all about the Catalog
• SSRS Disaster Recovery Infrastructure
• Optimizing the Catalog with SQL Server 2012 Always On
3. Reporting Services Architecture
Typical One-Box Deployment
[Diagram: clients connect through an NLB to a single RS Server that also hosts the Report Catalog (RSDB); data sources to report against include flat files, OLE DB, ODBC, SQL, AS, DB2, Oracle, Teradata, etc.]
4. Reporting Services Architecture
Remote Report Catalog = Higher Availability
[Diagram: clients connect through an NLB to the RS Server, with the Report Catalog (RSDB) hosted on a separate server; data sources to report against include flat files, OLE DB, ODBC, SQL, AS, DB2, Oracle, Teradata, etc.]
5. Reporting Services Architecture
Scale Out and High Availability Infrastructure
[Diagram: in a reporting scale-out deployment, clients connect through an NLB to a Report Server cluster of multiple RS Servers, all sharing a remote Report Catalog (RSDB); data sources to report against include flat files, OLE DB, ODBC, SQL, AS, DB2, Oracle, Teradata, etc.]
6. Report Catalog
Architecture
Report Server Catalog (RSDB)
Stores all report metadata including report definitions, report / history snapshots, scheduling, etc.
Report Server TempDB
Stores temporary snapshots while running reports
These databases can be a bottleneck
Optimize by applying standard SQL DB techniques
Catalog has a lot of I/O and transactions
– RS2005: Many inserts to the ChunkData, SnapshotData, and SessionData tables
– RS2008: Many inserts to the Segment table, which accounts for the majority of transactions
7. Report Catalog
Best Practices > Use a Dedicated Server
• Same server as the SSRS server
  • Great for small environments
  • In enterprise environments, too much resource contention
• Same server as the data source database
  • SQL resource contention (TempDB, plan cache, memory buffer pool) between the data source and RS catalogs
  • As load increases, you need to monitor CPU, I/O, network resources, and the buffer pool
• Reduce resource contention by having a dedicated RS catalog server you can tune
• Apply high availability and disaster recovery procedures (e.g. clustering, mirroring, log shipping) to protect the RSDB
8. Report Catalog
Best Practices > High Performance Disk
• Check out Predeployment I/O Best Practices
• Use more, smaller disks with faster rotation speeds (e.g. 15K RPM) rather than fewer, larger disks with slower rotation
• Maximize/balance I/O across ALL available spindles
• Separate disks between RSDB and RSTempDB
  • RSDB has a lot of small transactions (report metadata)
  • RSTempDB has fewer but larger transactions
• Pre-grow your databases (see the sketch below)
• Stripe DB files to the number of cores (0.25 – 1.0 files per core)
  • Minimizes allocation contention
  • Easier to rebalance the database when new LUNs are available
• Use RAID 10, not RAID 5
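To make the pre-grow and file-striping bullets concrete, a small T-SQL sketch follows; the logical file names, sizes, and path are illustrative assumptions, not figures from the deck:

USE [master];

-- Pre-grow the catalog database (logical name and sizes are assumptions).
ALTER DATABASE [ReportServer]
    MODIFY FILE (NAME = N'ReportServer', SIZE = 2GB, FILEGROWTH = 512MB);

-- Stripe RSTempDB across additional files, roughly 0.25-1.0 files per core.
ALTER DATABASE [ReportServerTempDB]
    ADD FILE (NAME = N'RSTempDB_2', FILENAME = N'E:\Data\RSTempDB_2.ndf',
              SIZE = 1GB, FILEGROWTH = 256MB);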
9. Report Catalog
Best Practices > Operations Best Practices
• Data in RSTempDB is highly volatile
  • Report lifetime policy of data = SessionTimeout value (default 10 minutes)
  • CleanupCycleMinutes guides the background cleanup thread
  • Once the session timeout is reached, the temporary snapshot is cleaned up from RSTempDB; this is done every CleanupCycleMinutes
• Data in RSDB is long lived and should be backed up
  • Backing Up and Restoring Databases in SQL Server
  • Optimizing Backup and Restore Performance in SQL Server
  • Backing Up and Restoring Encryption Keys
• Maintain your RS catalogs
  • Remember, these are SQL databases
  • E.g. re-indexing catalog tables or updating stats may improve query performance (see the sketch below)
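A minimal maintenance sketch for that last bullet, assuming the default ReportServer catalog name; run it during a quiet window, since index rebuilds take locks on the catalog tables:

USE [ReportServer];

DECLARE @tbl sysname, @sql nvarchar(max);

-- Walk every user table in the catalog and rebuild all of its indexes.
DECLARE tbl_cursor CURSOR FOR
    SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON t.schema_id = s.schema_id;

OPEN tbl_cursor;
FETCH NEXT FROM tbl_cursor INTO @tbl;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'ALTER INDEX ALL ON ' + @tbl + N' REBUILD;';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM tbl_cursor INTO @tbl;
END
CLOSE tbl_cursor;
DEALLOCATE tbl_cursor;

-- Refresh statistics across the catalog.
EXEC sp_updatestats;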
10. Report Catalog
Best Practices > Report Catalog Sizing
• RSDB database size
  • Varies by the number of reports published and the number of history snapshots
  • General rule of thumb:
    • A moderate-size report definition takes 100-200KB of disk space; this is larger than the actual RDL because SSRS persists both the RDL and the compiled binary
    • Assume a 5:1 compression ratio (e.g. for 10MB of data, the snapshot is 2MB in size)
• RSTempDB database size
  • Varies by the number of users who are concurrently using the Report Servers
  • Each live report execution generates a report snapshot persisted in the RSTempDB
  • General rule of thumb:
    • Assume 10-20% concurrency of the user base, e.g. for 1,000 users, a max of 200 concurrent users
    • If most users are accessing 10MB reports, then you will need 400MB of storage: 200 users x 10MB reports / 5:1 compression ratio = 400MB
12. Disaster Recovery Environment
Overall Infrastructure
Primary Data Center
- SSRS servers
- Separate Report Catalog with its own failover cluster
- Content switch
Disaster Recovery Site
- Closely duplicates the primary
- Separate geographic location
- Non-critical systems can utilize fewer resources
- But mission critical systems should have 1:1 duplication
[Diagram: SSRS servers and the RSDB failover cluster (Bostonsql4) behind a content switch in the primary data center; SSRS servers and the mirrored RSDB (Montréalsql4) at the disaster recovery site]
13. Disaster Recovery Environment
Network Configuration
- Ensure network connectivity for clients
- Use a content switch to load balance and redirect traffic
- Direct fiber between the PDC and the DR site to minimize latencies
[Diagram: content switches front the SSRS servers at both sites; the RSDB failover cluster (Bostonsql4) is in the primary data center and the mirrored RSDB (Montréalsql4) is at the DR site]
14. Disaster Recovery Environment
Database Configuration
- Bostonsql4 is the primary RSDB instance with an active/passive cluster in the PDC
- The content switch points to the sql4 alias
- Mirrored Montréalsql4 on the DR site
[Diagram: the same two-site topology, highlighting the Bostonsql4 failover cluster and its Montréalsql4 mirror]
15. Disaster Recovery Environment
Database Configuration: Active/Active vs. Active/Passive
Advantages of an Active/Passive failover cluster
- Allows other active database instances to be located on the passive node
- Works well if the passive node is not overutilized
Not good if the passive node has a lot of traffic, concurrent users, etc.; in that case, go with an Active/Active cluster
[Diagram: the same two-site topology with the Bostonsql4 active/passive failover cluster in the primary data center]
16. Disaster Recovery Environment
Database Configuration: Asynchronous Mirroring
- All RS operations must connect to the RSDB for its metadata
- Async mirroring has minimal to no impact on response time performance
- OK to be async, as report metadata is not frequently updated (see the sketch below)
[Diagram: the Bostonsql4 failover cluster in the primary data center is asynchronously mirrored to Montréalsql4 at the DR site]
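As a sketch, switching an established mirroring session into asynchronous (high-performance) mode is a single statement on the principal, assuming the default ReportServer catalog name:

-- Run on the principal; SAFETY OFF puts the existing mirroring session
-- into asynchronous (high-performance) mode.
ALTER DATABASE [ReportServer] SET PARTNER SAFETY OFF;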
17. Disaster Recovery Environment
Database Configuration > Initializing the Database Mirror
A relatively easy way to initialize a database mirroring setup is to:
1. Make full and transaction log backups of the Reporting Services databases on the principal server.
2. Copy the backups over to the disaster recovery site, restoring each Reporting Services database in no-recovery mode.
3. Set up the failover partner on the mirror (that is, the DR site) before you set up the failover partner on the principal server.
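A minimal T-SQL sketch of these three steps, assuming the default ReportServer catalog name, a shared backup path, mirroring endpoints already created on both instances, and port 5022; the server names follow the deck's diagrams (accents dropped for the TCP addresses):

-- Step 1. On the principal (Bostonsql4): full and log backups.
BACKUP DATABASE [ReportServer] TO DISK = N'\\dr-share\ReportServer.bak';  -- path is an assumption
BACKUP LOG [ReportServer] TO DISK = N'\\dr-share\ReportServer.trn';

-- Step 2. On the mirror (Montrealsql4): restore in no-recovery mode.
RESTORE DATABASE [ReportServer] FROM DISK = N'\\dr-share\ReportServer.bak' WITH NORECOVERY;
RESTORE LOG [ReportServer] FROM DISK = N'\\dr-share\ReportServer.trn' WITH NORECOVERY;

-- Step 3. Point the mirror at the principal FIRST, then the principal at the mirror.
-- On the mirror:
ALTER DATABASE [ReportServer] SET PARTNER = N'TCP://Bostonsql4:5022';     -- port is an assumption
-- On the principal:
ALTER DATABASE [ReportServer] SET PARTNER = N'TCP://Montrealsql4:5022';

-- Repeat for the other Reporting Services database (ReportServerTempDB).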
18. Failover Scenarios
• Primary Data Center Reporting Servers go offline
• Primary Data Center RSDB Active server goes offline
• Primary Data Center RSDB cluster goes offline
• Primary Data Center Outage
19. Failover Scenario
Primary Data Center Reporting Servers go offline
Automatic Failover
[Diagram: the content switch automatically redirects client traffic away from the offline primary SSRS servers; the RSDB failover cluster (Bostonsql4) and the DR mirror (Montréalsql4) are unaffected]
20. Failover Scenario
Primary Data Center RSDB Active server goes offline
Automatic Failover
[Diagram: the failover cluster automatically fails the RSDB over from the active node to the passive node within the primary data center (Bostonsql4)]
21. Failover Scenario
Primary Data Center RSDB cluster goes offline
Manual Failover
[Diagram: the RSDB is manually failed over from the Bostonsql4 cluster to the mirrored Montréalsql4 instance at the DR site]
22. Failover Scenario
Primary Data Center Outage
The content switch suspends the primary IP addresses and activates the DR site IP address so all connections are redirected to the DR site
[Diagram: all client traffic flows to the SSRS servers and the mirrored RSDB (Montréalsql4) at the DR site]
23. Failover Scenario
Primary Data Center Outage: Planned Outage
Manually execute a script to switch to the partner database (see the sketch below)
[Diagram: the primary data center is taken offline; service moves to the DR site (Montréalsql4)]
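A hedged sketch of that script, run on the principal; assumes the default ReportServer catalog name and the mirroring session from the earlier slides:

-- If the session is asynchronous (SAFETY OFF), make it synchronous
-- and let the mirror catch up first.
ALTER DATABASE [ReportServer] SET PARTNER SAFETY FULL;

-- Once the mirror state is SYNCHRONIZED, fail over with no data loss.
ALTER DATABASE [ReportServer] SET PARTNER FAILOVER;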
24. Failover Scenario
Primary Data Center Outage: Unplanned Outage
Manually execute a failover script to force the service to switch, with possible data loss (see the sketch below)
[Diagram: the primary data center is lost; the DR site (Montréalsql4) is forced into service]
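The forced variant, run on the mirror when the principal is unreachable; with an asynchronous mirror, any unshipped transactions are lost:

-- Unplanned failover: run on the mirror (DR site) when the principal
-- cannot be reached. Forces service with possible data loss.
ALTER DATABASE [ReportServer] SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;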
25. Disaster Recovery Environment
Database Configuration: Always On
SSRS - Always On Availability Group
[Diagram: SSRS servers at both sites connect through content switches to the AG Listener VNN; the primary replica of the RSDB runs in the primary data center with a secondary replica at the DR site]
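For the SQL Server 2012 AlwaysOn variant, a sketch of the availability group and listener follows; the AG name, listener name, IP address, and endpoint URLs are illustrative assumptions (the ReportServer database must be in full recovery and backed up before it can join a group):

-- On the primary replica (Bostonsql4): create the availability group for the catalog.
CREATE AVAILABILITY GROUP [RSAG]                      -- AG name is an assumption
FOR DATABASE [ReportServer]
REPLICA ON
    N'Bostonsql4'   WITH (ENDPOINT_URL = N'TCP://Bostonsql4:5022',
                          AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                          FAILOVER_MODE = AUTOMATIC),
    N'Montrealsql4' WITH (ENDPOINT_URL = N'TCP://Montrealsql4:5022',
                          AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,  -- DR replica
                          FAILOVER_MODE = MANUAL);

-- Add the listener; SSRS connects to this VNN instead of a physical server name.
ALTER AVAILABILITY GROUP [RSAG]
ADD LISTENER N'rsdb-listener' (WITH IP ((N'10.0.0.50', N'255.255.255.0')), PORT = 1433);

-- On the secondary (Montrealsql4): join the replica, then restore the database
-- WITH NORECOVERY and attach it to the group.
ALTER AVAILABILITY GROUP [RSAG] JOIN;
ALTER DATABASE [ReportServer] SET HADR AVAILABILITY GROUP = [RSAG];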
To ensure connectivity from the clients to the primary data center and the disaster recovery site, a common technique is to use a content switch to load-balance traffic within the individual sites as well as between the global sites. In the case of CareGroup Healthcare, a Cisco GSS is used as the content switch. As well, there is direct fiber network connectivity between the primary data center and the disaster recovery site to ensure minimal latencies for any communication between the two centers. If the primary site goes down for any reason, the content switch transparently redirects all client traffic to the disaster recovery set of Reporting Services servers. If the content switch is unavailable, the IP address can be changed at the DNS level. This latter change is a manual switch with a slightly longer network outage, which is due to the DNS cache clearing the old IP address and pointing to the new one.