Data Migration to Azure SQL and Azure SQL Managed Instance - June 19 2020 - Timothy McAliley
- This document provides information about upcoming webinars on migrating databases to Azure SQL services from June 19th through October 30th. It also lists resources for assessing databases and migrating them to Azure SQL Database or Managed Instance using tools like Azure Database Migration Service, Data Migration Assistant, and SQL Server Management Studio. Contact information is provided to RSVP or find more details on migration strategies and tools.
SQL Server High Availability and Disaster Recovery - Michael Poremba
High availability and disaster recovery strategies for Microsoft SQL Server databases are discussed. Key points include:
1) High availability aims to minimize downtime through redundant components and automatic failover, while disaster recovery protects against total data center outage through redundant systems and facilities.
2) Various SQL Server high availability options are examined, including database mirroring, log shipping, and failover clustering, each with different capabilities like automatic failover speed and hardware requirements.
3) Disaster recovery focuses on having a redundant system in a separate location that can be switched over to if the primary system fails. It requires strategies for backup, offsite storage, and recovery of data at the redundant location.
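The failover behavior in points 1 and 3 can be sketched as a toy model in Python (the Replica class and failover function are illustrative names, not any SQL Server API; a real system would add heartbeats, quorum, and client redirection):

```python
class Replica:
    """A toy database node that can act as primary or standby."""
    def __init__(self, name):
        self.name = name
        self.role = "standby"
        self.healthy = True

def failover(primary, standbys):
    """Promote the first healthy standby when the primary is down.

    Returns whichever node is now acting as primary. If no standby is
    healthy, the situation is a disaster-recovery event, not a failover.
    """
    if primary.healthy:
        return primary  # nothing to do
    for node in standbys:
        if node.healthy:
            node.role = "primary"
            primary.role = "standby"
            return node
    raise RuntimeError("disaster recovery needed: no healthy standby")
```

The point of the sketch is the ordering guarantee: the primary is demoted only once a replacement has been found, so there is never a window with two primaries.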
Building Lakehouses on Delta Lake with SQL Analytics Primer - Databricks
You’ve heard the marketing buzz, and maybe you have been to a workshop and worked with some Spark, Delta, SQL, Python, or R, but you still need some help putting all the pieces together. Join us as we review some common techniques to build a lakehouse using Delta Lake, use SQL Analytics to perform exploratory analysis, and build connectivity for BI applications.
This document provides an overview of Azure SQL DB environments. It discusses the different types of cloud platforms including IaaS, PaaS and DBaaS. It summarizes the key features and benefits of Azure SQL DB including automatic backups, geo-replication for disaster recovery, and elastic pools for reducing costs. The document also covers pricing models, performance monitoring, automatic tuning capabilities, and security features of Azure SQL DB.
6 Nines: How Stripe keeps Kafka highly-available across the globe with Donny ... - HostedbyConfluent
Availability is a key metric for any Kafka deployment, but when every event is critical the system must be centered around keeping publishers and consumers highly available, even when a Kafka cluster goes down. At Stripe, our core business relies on Kafka, and as we outgrew a single Kafka cluster we had to build a multi-cluster system that would fit our needs while supporting a target of 99.9999% availability for our most critical use cases.
In this talk we’ll discuss our solution to this problem: an in-house proxy layer and multi-cluster topology which we’ve built and operated over the past 3 years. Our proxy layer enables multiple Kafka clusters to work in coordination across the globe, while hitting our ambitious availability targets and providing clean client abstractions.
In this talk we’ll discuss how our Kafka deployment provides: availability for both publishers and consumers in the face of cluster outages, increased security and observability, simplified cluster maintenance, and global routing for constraints such as data locality. We’ll highlight the benefits & tradeoffs of our approach, the design of our proxy layer, Kafka configuration decisions, and where we’re planning to go from here.
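The 99.9999% target mentioned above is easier to grasp as a downtime budget; a quick back-of-the-envelope calculation in Python (the function name is illustrative):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def downtime_budget(availability):
    """Seconds of downtime per year allowed by an availability target."""
    return SECONDS_PER_YEAR * (1 - availability)

print(f"99.9%    -> {downtime_budget(0.999):,.0f} s/year (~8.8 hours)")
print(f"99.9999% -> {downtime_budget(0.999999):,.1f} s/year")
```

Six nines leaves roughly half a minute of total unavailability per year, which is why a single cluster (whose upgrades and failures alone exceed that) cannot meet the target and a multi-cluster design becomes necessary.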
- Oracle Data Guard is a data protection and disaster recovery solution that maintains up to 9 synchronized standby databases to protect enterprise data from failures, disasters, errors, and corruptions.
- Data Guard uses redo apply and SQL apply technologies to synchronize primary and standby databases by transmitting redo logs from the primary and applying the redo logs on the standby databases.
- Data Guard allows role transitions like switchovers and failovers between primary and standby databases to minimize downtime during planned and unplanned outages.
This document provides an introduction and overview of Couchbase Server, a NoSQL document database. It describes Couchbase Server as the leading open source project focused on distributed database technology. It outlines key features such as easy scalability, always-on availability, flexible data modeling using JSON documents, and core features including clustering, replication, indexing and querying. The document also provides examples of basic write, read and update operations on a single node and cluster, adding nodes, handling node failures, indexing and querying capabilities, and cross data center replication.
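The basic write, read, and update operations mentioned above can be illustrated with a dict-backed toy (ToyBucket is a hypothetical class, not the real Couchbase SDK; actual buckets expose similar get/upsert/replace calls, with a CAS value used for optimistic concurrency):

```python
import itertools

class ToyBucket:
    """A dict-backed stand-in for a Couchbase bucket: JSON-like documents
    keyed by ID, each carrying a CAS (compare-and-swap) value so a writer
    can detect that another client updated the document since it was read."""
    def __init__(self):
        self._docs = {}
        self._next_cas = itertools.count(1)

    def upsert(self, key, doc):
        """Create or overwrite a document; returns its new CAS value."""
        cas = next(self._next_cas)
        self._docs[key] = (doc, cas)
        return cas

    def get(self, key):
        """Return (document, cas) for a key."""
        return self._docs[key]

    def replace(self, key, doc, cas):
        """Update only if the document is unchanged since it was read."""
        _, current = self._docs[key]
        if cas != current:
            raise ValueError("CAS mismatch: document changed since read")
        return self.upsert(key, doc)
```

The CAS check is the piece worth noticing: it lets concurrent writers update the same document safely without holding locks.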
Common Patterns of Multi Data-Center Architectures with Apache Kafka - Confluent
Whether you know you want to run Apache Kafka in multiple data centers and need practical advice or you are wondering why some organizations even need more than one cluster, this online talk is for you.
In this short session, we’ll discuss the basic patterns of multi-datacenter Kafka architectures, explore some of the use-cases enabled by each architecture and show how Confluent Enterprise products make these patterns easy to implement.
Visit www.confluent.io for more information.
This document discusses SQL Server 2019 and provides the following information:
1. It introduces Javier Villegas, a technical speaker and SQL Server expert.
2. It outlines several new capabilities in SQL Server 2019 including artificial intelligence, container support, and big data analytics capabilities using Apache Spark.
3. It compares editions and capabilities of SQL Server on Windows and Linux and notes they are largely the same.
SQL Server Profiler & Performance Monitor - SarabPreet Singh - Rishu Mehra
SQL Server Profiler and Performance Monitor are monitoring tools that can help identify and troubleshoot performance issues. Performance Monitor provides overall resource usage data to establish a baseline, while SQL Profiler tracks individual SQL statements. The tools can be synchronized to correlate profiler output with performance counters. Proper monitoring and baseline creation are important for performance issue diagnosis.
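The synchronization idea above — correlating Profiler statements with Performance Monitor counters by timestamp — amounts to merging two sorted time series. A sketch of that merge (generic Python, not a feature of either tool):

```python
from bisect import bisect_right

def correlate(statements, counters):
    """Pair each captured statement with the most recent counter sample
    taken at or before the statement's timestamp.

    statements: list of (timestamp, sql_text), sorted by timestamp
    counters:   list of (timestamp, cpu_percent), sorted by timestamp
    """
    sample_times = [t for t, _ in counters]
    paired = []
    for ts, sql in statements:
        i = bisect_right(sample_times, ts) - 1  # last sample <= ts
        cpu = counters[i][1] if i >= 0 else None
        paired.append((sql, cpu))
    return paired
```

A statement that lands next to a CPU spike in the merged output is a natural starting point for tuning, which is exactly the diagnosis workflow the two tools support together.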
Splunk: Druid on Kubernetes with Druid-operator - Imply
We went through the journey of deploying Apache Druid clusters on Kubernetes (K8s) and created a druid-operator (https://github.com/druid-io/druid-operator). This talk introduces the Druid Kubernetes operator, how to use it to deploy Druid clusters, and how it works under the hood. We will share how we use this operator to deploy Druid clusters at Splunk.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Druid is a complex stateful distributed system: a Druid cluster consists of multiple web services such as Broker, Historical, Coordinator, Overlord, and MiddleManager, each deployed with multiple replicas. Deploying a single web service on K8s requires creating a few K8s resources via YAML files, and the work multiplies across the services inside a Druid cluster. Doing it again for multiple Druid clusters (dev, staging, and production environments) makes it even more tedious and error-prone.
K8s enables the creation of application-specific extensions (for an application such as Druid), called “Operators”, which combine Kubernetes and application-specific knowledge into a reusable K8s extension that makes deploying complex applications simple.
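A rough sketch of the repetitive work an operator takes off your hands — generating per-service Deployment manifests for each environment (the manifests function and the field subset shown are illustrative; a real operator reconciles complete resources, including services, volumes, and config):

```python
DRUID_SERVICES = ["broker", "historical", "coordinator",
                  "overlord", "middlemanager"]

def manifests(cluster, replicas):
    """Emit one minimal Deployment spec per Druid service for a cluster,
    standing in for the YAML files you would otherwise write by hand."""
    return [
        {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": f"druid-{cluster}-{svc}"},
            "spec": {"replicas": replicas.get(svc, 1)},
        }
        for svc in DRUID_SERVICES
    ]

# One call per environment instead of hand-editing N services x M clusters
# worth of YAML files:
all_envs = {env: manifests(env, {"historical": 3})
            for env in ["dev", "staging", "prod"]}
```

The operator pattern goes one step further than generation: it watches a single custom resource describing the cluster and continuously reconciles these derived resources toward it.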
Azure SQL Database & Azure SQL Data Warehouse - Mohamed Tawfik
This document provides an overview of Microsoft Azure Data Services and Azure SQL Database. It discusses Infrastructure as a Service (IaaS) versus Platform as a Service (PaaS), and highlights the opportunities in the Linux database market. It also discusses Microsoft's commitment to customer choice and partnerships with companies like Red Hat. The remainder of the document focuses on features of Azure SQL Database, including an overview of the DTU and vCore purchasing models, managed instances, backup and recovery, high availability options, elastic scalability, and data sync capabilities.
The document discusses Dell EMC VxRail, a hyper-converged appliance that combines servers, storage, and networking into a single system. It is presented as the standard in hyper-converged infrastructure and focuses on enabling business innovation through consumption-based buying which allows customers to focus resources on differentiating their business instead of IT integration. VxRail offers various configurations and scale options to match different use cases from small to large environments.
This presentation shows new features in SQL 2019, and a recap of features from SQL 2000 through 2017 as well. You would be wise to hear someone from Microsoft deliver this material.
AHF Insights provides a bird’s-eye view of the entire system with the ability to further drill down for root cause analysis and correlation. It provides the ability to check aspects like configuration, environment topology, metrics, logs, system configuration, system state, anomalies in the operating system, best practices compliance, root cause for issues, and fixes in some of the anomalous cases. Learn how to use this powerful tool to gain better insights into your system—and how to analyze large amounts of data in a short amount of time. This session includes a demo and examples of how you can use AHF Insights in your environment to solve issues faster.
The document discusses various Oracle performance monitoring tools including Oracle Enterprise Manager (OEM), Automatic Workload Repository (AWR), Automatic Database Diagnostic Monitor (ADDM), Active Session History (ASH), and eDB360. It provides overviews of each tool and examples of using AWR, ADDM, ASH and eDB360 for performance analysis through demos. The conclusions recommend OEM as the primary tool and how the other tools like AWR, ADDM and ASH complement it for deeper performance insights.
MoP (MQTT on Pulsar) - a Powerful Tool for Apache Pulsar in IoT - Pulsar Summi... - StreamNative
MQTT (Message Queuing Telemetry Transport) is a message protocol based on the pub/sub model; its compact message structure, low resource consumption, and high efficiency make it suitable for IoT applications with low bandwidth and unstable network environments.
This session will introduce MQTT on Pulsar, which allows users of the MQTT transport protocol to use Apache Pulsar. I will share the architecture, principles, and future plans of MoP, to help you understand Apache Pulsar's capabilities and practices in the IoT industry.
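One concrete piece of the protocol that any MQTT-compatible broker (MoP included) must implement is topic-filter matching. A simplified version, ignoring the special handling of $-prefixed system topics:

```python
def topic_matches(topic_filter, topic):
    """MQTT topic-filter matching: '+' matches exactly one level,
    '#' matches any number of trailing levels.

    Simplified sketch of the rules in the MQTT spec; does not handle
    $-prefixed system topics.
    """
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True  # multi-level wildcard swallows the rest
        if i >= len(t_parts):
            return False  # filter is longer than the topic
        if part != "+" and part != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)
```

For example, a subscription to `sensors/+/temp` receives `sensors/kitchen/temp` but not `sensors/kitchen/door/temp`, while `sensors/#` receives both.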
This document discusses advanced Liferay architecture including clustering and high availability configurations. It provides sample setups using Apache load balancers with Tomcat and Liferay. It also outlines the key components that need to be addressed for a clustered Liferay implementation including the load balancer, centralized database, Ehcache, Lucene, document library, and more. The document then provides steps to test if clustering is working properly and discusses approaches to performance tuning Liferay including using a repeatable test script, load testing, and Java profiling.
The document discusses SQL Server migrations from Oracle databases. It highlights top reasons for customers migrating to SQL Server, including lower total cost of ownership, improved performance, and increased developer productivity. It also outlines concerns about migrations and introduces the SQL Server Migration Assistant (SSMA) tool, which automates components of database migrations to SQL Server.
Introducing Change Data Capture with Debezium - ChengKuan Gan
This document discusses change data capture (CDC) and how it can be used to stream change events from databases. It introduces Debezium, an open source CDC platform that captures change events from transaction logs. Debezium supports capturing changes from multiple databases and transmitting them as a stream of events. The summary discusses how CDC can be used for data replication between databases, auditing, and in microservices architectures. It also covers deployment of CDC on Kubernetes using OpenShift.
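The core CDC loop — reading committed entries from the transaction log starting at a stored offset and emitting change events with before/after row images — can be sketched as follows (a toy model of the Debezium-style event envelope, not the real Debezium API):

```python
def capture(txlog, offset):
    """Turn committed transaction-log entries into change events in the
    Debezium envelope style: 'c' create, 'u' update, 'd' delete, with
    before/after row images and a source position for resumption."""
    events = []
    for pos in range(offset, len(txlog)):
        entry = txlog[pos]
        events.append({
            "op": entry["op"],
            "before": entry.get("before"),
            "after": entry.get("after"),
            "source": {"pos": pos},
        })
    return events, len(txlog)  # new offset to persist for restart safety
```

Persisting the returned offset is what makes log-based CDC resumable: after a crash, the capture process restarts from the last committed position instead of re-reading or missing changes.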
Short introduction to different options for ETL & ELT in the Cloud with Microsoft Azure. This is a small accompanying set of slides for my presentations and blogs on this topic
Oracle 12c Data Guard Deep Dive Presentation - Nabil Nawaz
This document provides an overview of Oracle Data Guard including:
- Data Guard allows configuration of up to 30 physical or logical standby databases for high availability and disaster recovery.
- It provides benefits such as offloading backups and reporting without impacting primary database performance.
- Key concepts include primary and standby databases, redo transport, and different protection modes for data replication.
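The trade-off between the protection modes mentioned above comes down to when a commit is acknowledged relative to redo shipping. A toy model (the function is illustrative Python, not Oracle syntax or the actual protection-mode names' full semantics):

```python
def commit(record, mode, standby_redo, standby_reachable=True):
    """Toy model of Data Guard redo transport for a single commit.

    'sync'  (Maximum Protection/Availability style): the commit succeeds
            only after the standby has acknowledged the redo.
    'async' (Maximum Performance style): the commit succeeds immediately;
            redo ships when the network allows, so the standby may lag.
    """
    if mode == "sync":
        if not standby_reachable:
            raise RuntimeError("commit blocked: standby unreachable")
        standby_redo.append(record)  # ack required before commit returns
    elif standby_reachable:
        standby_redo.append(record)  # best effort; lag is acceptable
    return "committed"
```

Synchronous transport buys zero data loss at the cost of primary availability and commit latency; asynchronous transport keeps the primary fast and available but accepts a potential loss window on failover.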
A preview into SQL Server 2019 from Bob, Asad and my presentation at PASS Summit 2018 (Nov '18). We provided insights into what our public preview builds for SQL Server 2019 had in November.
This document summarizes a presentation on Oracle RAC (Real Application Clusters) internals with a focus on Cache Fusion. The presentation covers:
1. An overview of Cache Fusion and how it allows data to be shared across instances to enable scalability.
2. Dynamic re-mastering which adjusts where data is mastered based on access patterns to reduce messaging.
3. Techniques for handling contention including partitioning, connection pools, and separating redo logs.
4. Benefits of combining Oracle Multitenant and RAC such as aligning PDBs to instances.
5. How Oracle In-Memory Column Store fully integrates with RAC including fault tolerance features.
This presentation is for those of you who are interested in moving your on-prem SQL Server databases and servers to Azure virtual machines (VMs) in the cloud so you can take advantage of all the benefits of being in the cloud. This is commonly referred to as a “lift and shift” as part of an Infrastructure-as-a-Service (IaaS) solution. I will discuss the various Azure VM sizes and options, migration strategies, storage options, high availability (HA) and disaster recovery (DR) solutions, and best practices.
Spark (Structured) Streaming vs. Kafka Streams - Guido Schmutz
Independent of the source of data, the integration and analysis of event streams gets more important in the world of sensors, social media streams and Internet of Things. Events have to be accepted quickly and reliably, they have to be distributed and analyzed, often with many consumers or systems interested in all or part of the events. In this session we compare two popular Streaming Analytics solutions: Spark Streaming and Kafka Streams.
Spark is a fast and general engine for large-scale data processing and has been designed to provide a more efficient alternative to Hadoop MapReduce. Spark Streaming brings Spark's language-integrated API to stream processing, letting you write streaming applications the same way you write batch jobs. It supports both Java and Scala.
Kafka Streams is the stream processing solution which is part of Kafka. It is provided as a Java library and by that can be easily integrated with any Java application.
This presentation shows how you can implement stream processing solutions with each of the two frameworks, discusses how they compare and highlights the differences and similarities.
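For a flavor of what “write streaming applications the same way you write batch jobs” means, here is the classic stateful word count reduced to plain Python — a sketch of the micro-batch model both frameworks express in their own APIs, using neither framework:

```python
from collections import Counter

def run_word_count(batches, state=None):
    """Stateful micro-batch word count, the 'hello world' of both Spark
    Structured Streaming and Kafka Streams: each incoming batch of lines
    updates a running table of counts, which survives across batches."""
    state = Counter() if state is None else state
    for batch in batches:
        for line in batch:
            state.update(line.lower().split())
    return state
```

The distinguishing feature of the streaming versions is that this `state` is managed by the framework (checkpointed, partitioned, and recovered on failure) rather than held in a local variable.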
SQL Server 2016 introduces several new features for In-Memory OLTP including support for up to 2 TB of user data in memory, system-versioned tables, row-level security, and Transparent Data Encryption. The in-memory processing has also been updated to support more T-SQL functionality such as foreign keys, LOB data types, outer joins, and subqueries. The garbage collection process for removing unused memory has also been improved.
The document discusses building a data warehouse in SQL Server. It provides an agenda that covers topics like an overview of data warehousing, data warehouse design, dimension and fact tables, and physical design. It also discusses components of a data warehousing solution like the data warehouse database, ETL processes, and security considerations.
Antonios Chatzipavlis presented on migrating SQL workloads to Azure. He discussed modernizing data platforms by discovering, assessing, planning, transforming, optimizing, testing and remediating. Key migration considerations include remaining, rehosting, refactoring, rearchitecting, rebuilding or replacing workloads. Tools for migrating data include Microsoft Assessment and Planning Toolkit, Data Migration Assistant, Database Experimentation Assistant, SQL Server Migration Assistant, and Azure Database Migration Service. Workloads can be migrated to Azure VMs, Azure SQL Databases or Azure SQL Managed Instances.
This document provides information about Satya Jayanty, an expert on SQL Server platform upgrades. It includes details about his experience, publications, speaking engagements, and community contributions related to SQL Server. The document also outlines some of the key topics he will cover in his presentation on SQL Server platform upgrades, including upgrade strategies, tools, best practices, and lessons learned from real-world upgrade scenarios.
Migrate a successful transactional database to azureIke Ellis
This slide deck will show you the techniques and technologies necessary to take a large, transactional SQL Server database and migrate it to Azure, Azure SQL Database, and Azure SQL Database Managed Instance.
Sandeep Raj Kosuri has over 7 years of experience as a SQL Server database administrator. He has a Bachelor's degree in Electrical and Electronics Engineering and is proficient in SQL Server, Microsoft Office, and various operating systems. He currently works as a SQL Server DBA at InfoGrid Technologies, where his responsibilities include database administration, performance monitoring and optimization, backup and recovery, and security management. Previously he has worked as a SQL Server DBA for two other companies, where he performed tasks such as database maintenance, upgrades, replication, and disaster recovery planning.
Rohit Panot has over 9 years of experience as an Oracle Database Administrator with expertise in installation, configuration, backup/recovery, performance tuning, and health monitoring of Oracle databases on Windows, Linux, AIX, and Solaris environments. He has also worked with SQL Server and MySQL databases. Key projects include providing 24/7 production support for multiple clients and leading a team supporting databases over 3TB in size.
A Complete BI Solution in About an Hour!Aaron King
In this presentation Aaron will cover how to collect data from multiple sources using SQL Server 2012 Integration Services (SSIS). Then he will use SQL Server Reporting Services (SSRS) to report detail on that data. After that he will use SQL Server Analysis Services (SSAS) to create a KPI. Finally he’ll present that KPI on a dashboard via a web page. The goal of this presentation is to show how seamless the Microsoft Business Intelligence products are. If you’ve only used a few of these products, you’ll appreciate seeing them together all at once. Code will be provided.
Recent advances in Postgres have propelled the database forward to meet today’s data challenges. At some of the world’s largest companies, Postgres plays a major role in controlling costs and reducing dependence on traditional providers.
This presentation addresses:
* What workloads are best suited for introducing Postgres into your environment
* The success milestones for evaluating the ‘when and how’ of expanding Postgres deployments
* Key advances in recent Postgres releases that support new data types and evolving data challenges
This presentation is intended for strategic IT and Business Decision-Makers involved in data infrastructure decisions and cost-savings.
ECMDay2015 - Kent Agerlund – Configuration Manager 2012 – A Site ReviewKenny Buntinx
Ever experienced sluggish ConfigMgr administrator console performance, or collections taking forever to refresh? Join Kent Agerlund as he walks you through a ConfigMgr site review and reveals why so many ConfigMgr installations don't perform as they should. This session will be packed with tips and tricks, SQL secrets, and PowerShell scripts that will optimize your environment and bring ConfigMgr to the state it was supposed to be in from the beginning.
Tobiasz Janusz Koprowski presents information on disaster preparedness and recovery best practices. The document discusses the importance of having backups, recovery procedures, clearly defined roles and responsibilities, service level agreements, and contact information in case of an outage. Specific recommendations include regularly testing restores, storing backup files offsite, having accurate documentation, and ensuring key personnel are prepared to respond to disasters and outages.
This document provides an overview of an online SQL DBA training course. The training contains 6 modules that cover topics such as SQL Server architecture, installation, configuration, security, backup/recovery, high availability, and clustering. Specific topics include installing and upgrading SQL Server, performance tuning, indexing, replication, log shipping, database mirroring, and AlwaysOn availability groups. The goal is to help students learn how to administer a SQL Server database infrastructure.
Kaplan Shares Key Learnings and Best Practices in Optimizing Database Adminis...Datavail
Hear directly from Kaplan’s Vice President of Business Systems and Architecture, who will discuss how he partnered with Datavail to augment his team using robust technical discovery, SQL Server Health Checks and assessments, knowledge transfer, runbook documentation, weekly meetings, status updates, and continuous service support and improvement. Hear their challenges and experience real-world examples on how to tackle them.
This document provides an overview of an online training for SQL Server 2012. It outlines 6 modules that will be covered including installation, configuration, security, backup/restore, high availability, and clustering. The training will cover topics such as installing and upgrading SQL Server, configuring services, automating tasks, monitoring performance, backup strategies, log shipping, mirroring, replication, and SQL Server clustering.
Should I move my database to the cloud?James Serra
So you have been running on-prem SQL Server for a while now. Maybe you have taken the step to move it from bare metal to a VM, and have seen some nice benefits. Ready to see a TON more benefits? If you said “YES!”, then this is the session for you as I will go over the many benefits gained by moving your on-prem SQL Server to an Azure VM (IaaS). Then I will really blow your mind by showing you even more benefits by moving to Azure SQL Database (PaaS/DBaaS). And for those of you with a large data warehouse, I've also got you covered with Azure SQL Data Warehouse. Along the way I will talk about the many hybrid approaches so you can take a gradual approach to moving to the cloud. If you are interested in cost savings, additional features, ease of use, quick scaling, improved reliability and ending the days of upgrading hardware, this is the session for you!
Ripon K. Datta is a SQL Server Database Administrator with 5 years of experience. He has extensive skills in SQL Server installation, configuration, security, high availability, backup/recovery, performance tuning, and more. He holds certifications including MCITP: SQL Database Administrator and MCTS: Microsoft Certified Technology Specialist of SQL Server. Currently he works as a Database Administrator for the Department of Education, where his responsibilities include SQL Server maintenance and management.
Sql server operational best practices notes from the field - charley hanan...Charley Hanania
This document contains notes from a presentation given by Charley Hanania on SQL Server operational best practices. It includes Hanania's background and contact information, as well as an agenda for discussing topics like obfuscation, rubber stamping SQL Server installations, using projects and solutions, setting service level agreements, and monitoring. The presentation aims to provide guidelines and procedures for operating SQL Server installations that can help with issues like performance, security, and disaster recovery.
Configuring Applications for SQL Server AlwaysOn Readable Secondary Replicas - C...SpanishPASSVC
This document announces an upcoming webinar on September 24th about configuring applications to take advantage of SQL Server AlwaysOn readable secondary replicas. It provides background on SQL Server high availability and disaster recovery technologies, an overview of AlwaysOn availability groups, and how to configure applications and use active secondary replicas for read scaling and backup operations. The webinar will include a demonstration of implementing an AlwaysOn availability group and testing readable secondaries.
Optimizing Open Source for Greater Database Savings & ControlEDB
Postgres can play a major role in keeping costs under control and in reducing dependence on traditional database vendors. With Postgres it is possible to reduce DBMS costs by 80% or more.
EnterpriseDB Postgres Plus Advanced Server offers Oracle compatibility with enterprise tools and features, built on the legendary open-source PostgreSQL platform.
Highlights of the presentation:
- An overview of the database landscape: past, present, and future
- How to lower TCO and integrate Postgres into your current database environment
- Which workloads are best suited for introducing Postgres into your data center
- Critical success factors for successfully expanding Postgres deployments
- The latest developments in recent Postgres releases, supporting new data types and new data challenges
Target audience: this presentation is intended for strategic IT and business decision-makers involved in IT infrastructure and application development who are looking for cost savings with a secure, reliable, and proven database.
SharePoint Databases: What you need to know (201504)Alan Eardley
This document discusses SharePoint databases and provides information on:
- The speaker's background and areas of expertise in SharePoint and SQL Server.
- An overview of what will be covered, including how SharePoint uses SQL databases.
- Details on the different types of databases needed for SharePoint including content, service application, and administrative databases.
- Best practices for planning database needs including sizing, growth, and high availability options.
- How a DBA can help with configuration, monitoring, backup, and other database maintenance tasks for SharePoint.
4. Workshop Outline
• Today’s Challenges
• What Is Consolidation?
• Consolidation Approach
• The Benefits of Consolidation
• Time to upgrade???
• Why Upgrade?
• Upgrade Methodology
• Upgrade & Consolidation Tools
• Q&A
5. Today’s Challenges
• Financial Resources
• Major economic reset AKA a recession
• Organizations are cutting costs in response
• Managing Complicated Infrastructure
• Too many servers, too few DBAs
• Management tools not always effective
• Unknown servers contribute to license issues
• Keeping software current presents challenges
• Security risks posed by non-homogenous environment
6. What Is Consolidation?
(Diagram: multiple database servers consolidated into a single server)
• Consolidation is the process of methodically decreasing the number of database servers to reduce the size and complexity of the data infrastructure.
7. What Is Consolidation?
(Diagram: multiple redundant applications consolidated onto a single server)
• Consolidation can also include reducing the number of duplicate applications.
8. Consolidation Approach
• Identification: existing servers are identified and then classified as running either internal or vendor applications.
• Classification: each database is broken down and classified further using a set of criteria.
• Organization: the process concludes with a set of databases organized into those that can be consolidated and those that cannot.
9. The Benefits of Consolidation
• Reduced hardware costs by removing under-utilized server resources from production
• Avoided ever-increasing storage costs by leveraging compression and other features of SQL Server 2016
• Improved data security and auditing capabilities
• Better manageability for the data infrastructure
• Improved overall performance of existing database resources
• Reduced equipment environmental requirements such as cooling and AC
• Improved business efficiency through a better managed, more agile data infrastructure
10. …journey so far: SQL Server 2008 R2 to 2016
• Support for R
• Query Store
• Stretch Database
• JSON Support
• Row-Level Security
• Always Encrypted
11. Time to upgrade???
• How can you perform the upgrade (pro-actively)?
• What tools can help collect data for analysis?
• What kind of upgrade strategy would you follow on various SQL instances?
• How can you detect troubled instances/databases?
12. The List…
• Why Upgrade?
• Building plans & strategies…
• Upgrade route…
• Best practices…
• Round-up
13. Why Upgrade?
• End of mainstream support
• SQL Server 2000
• SQL Server 2005
• SQL Server 2008 & R2
• SQL Server 2012
• Hardware upgrade
• Consolidation
• …and more
14. Mainstream and Extended support
Version                | Mainstream ends | Extended ends
SQL Server 2000 SP4    | 08-04-2008      | 09-04-2013
SQL Server 2005 SP4    | 12-04-2011      | 12-04-2016
SQL Server 2008 SP4    | 08-04-2014      | 09-07-2019
SQL Server 2008 R2 SP3 | 08-07-2014      | 09-07-2019
SQL Server 2012 SP4    | 11-07-2017      | 12-07-2022
SQL Server 2014 SP2    | 09-07-2019      | 09-07-2024
SQL Server 2016 SP1    | 31-07-2021      | 14-07-2026
https://support.microsoft.com/en-us/lifecycle
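A quick script can flag versions whose extended support has already lapsed, using the DD-MM-YYYY dates from the slide above (a sketch against those dates, not an official lifecycle lookup):

```python
from datetime import date, datetime

# Extended-support end dates from the slide (DD-MM-YYYY format).
extended_support = {
    "SQL Server 2000 SP4":    "09-04-2013",
    "SQL Server 2005 SP4":    "12-04-2016",
    "SQL Server 2008 SP4":    "09-07-2019",
    "SQL Server 2008 R2 SP3": "09-07-2019",
    "SQL Server 2012 SP4":    "12-07-2022",
    "SQL Server 2014 SP2":    "09-07-2024",
    "SQL Server 2016 SP1":    "14-07-2026",
}

def out_of_support(as_of):
    """Return, sorted, the versions whose extended support ended before `as_of`."""
    return sorted(
        version for version, end in extended_support.items()
        if datetime.strptime(end, "%d-%m-%Y").date() < as_of
    )

print(out_of_support(date(2020, 1, 1)))
# ['SQL Server 2000 SP4', 'SQL Server 2005 SP4',
#  'SQL Server 2008 R2 SP3', 'SQL Server 2008 SP4']
```

Anything this flags should be at the top of the upgrade list; always confirm against the Microsoft lifecycle page linked above.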
15. Why Upgrade?
• New features, by release:
SQL Server 2012:
• AlwaysOn Availability Groups
• Windows Server Core Support
• Columnstore Indexes
• User-Defined Server Roles
• Enhanced Auditing Features
• BI Semantic Model
• Sequence Objects
• Enhanced PowerShell Support
• Distributed Replay
• Power View
• SQL Azure Enhancements
• Big Data Support
SQL Server 2014:
• Improved In-Memory engine
• Enhanced Windows 2012 Integration
• Enhanced AlwaysOn Availability Groups
• Backup Enhancements
• Updatable Columnstore Indexes
• SSDT for BI
• Power BI for Office 365 integration
SQL Server 2016:
• Always Encrypted
• Stretch Database
• Real-time Operational Analytics
• PolyBase into SQL Server
• Native JSON support
• AlwaysOn enhancements
• Enhanced In-Memory OLTP
• Revamped SSDT
17. Planning
• Preparing to Upgrade
• Review upgrade documentation and resources
• Document your resources and environment
• Identify upgrade requirements
• Decide on upgrade strategy
• Upgrade High-Availability servers
• Establish backup and rollback plans
• Test the plan!!!
18. Pre-Upgrade
• Check the environment
• Run Data Migration Assistant (2012, 2014, 2016 & Azure SQL)
• Ensure the environment is clean
• Check database consistency
• Consider shrinking data files (read-only DBs) and log files
• Rebuild indexes
• Run SQL Server Best Practices Analyzer (BPA)
• Back up your environment
• System and user databases, including DTS/SSIS packages
• …what else?
• Documentation
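The consistency-check and backup steps are SQL Server operations (DBCC CHECKDB, BACKUP DATABASE). As a runnable stand-in, Python's standard-library SQLite driver exposes the same two moves, so the shape of the pre-upgrade routine can be sketched like this:

```python
import sqlite3

# A throwaway database standing in for the instance to be upgraded.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
src.execute("INSERT INTO orders (total) VALUES (9.99), (19.99)")
src.commit()

# 1. Consistency check -- the analogue of DBCC CHECKDB.
status = src.execute("PRAGMA integrity_check").fetchone()[0]
assert status == "ok", f"fix corruption before upgrading: {status}"

# 2. Full backup before touching anything -- analogue of BACKUP DATABASE.
dst = sqlite3.connect(":memory:")  # would be a file path in real use
src.backup(dst)
rows = dst.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(rows)  # 2
```

The point is the ordering: verify consistency first, then take (and test-restore) a backup, and only then proceed with the upgrade itself.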
19. Prepare to Post-upgrade
• The Upgrade
• Document every step
• System health checks
• Perform the upgrade - strategy
• Environment backup (pre to post)
• Go/No-go (Checkpoint)
• Review the logs
• Troubleshoot any upgrade failure
• Test functionality and performance
• Determine application acceptance
21. Side-by-Side (Migration) Upgrade
• Install a new instance of SQL Server without affecting the existing instance
• Can be on the same or a different server
• Database objects are manually copied to the new instance
• Copy Database Wizard / Detach -> Copy -> Attach / Backup -> Restore
Pros & Cons.
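The "manually copy objects to the new instance" step can be illustrated with SQLite as a stand-in: script out every object and row from the old instance, then replay the script on a freshly created one, leaving the original untouched (the essence of side-by-side):

```python
import sqlite3

old = sqlite3.connect(":memory:")           # existing instance
old.executescript(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);"
    "INSERT INTO customers (name) VALUES ('alice'), ('bob');"
)

# Script every object and row out of the old instance...
dump = "\n".join(old.iterdump())

new = sqlite3.connect(":memory:")           # freshly installed instance
new.executescript(dump)                     # ...and replay it on the new one

# The old instance keeps running untouched; the new one has the data.
names = [r[0] for r in new.execute("SELECT name FROM customers ORDER BY id")]
print(names)  # ['alice', 'bob']
```

With SQL Server the equivalent transfer is done via the Copy Database Wizard, detach/copy/attach, or backup/restore, but the rollback story is the same: the old instance is still there if acceptance testing fails.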
22. In-Place Upgrade
• Upgrades an existing installation
• Instance name remains the same after upgrade
• Old instance no longer exists
• User data and configuration are preserved
• Mostly automated process through SQL Server Setup
• Performed on same machine as existing installation
Pros & Cons.
23. Upgrade & Consolidation Tools
• MAP (Microsoft Assessment and Planning) Toolkit for SQL Server
• https://www.microsoft.com/en-us/download/details.aspx?id=7826
• DMA (Data Migration Assistant)
• https://www.microsoft.com/en-us/download/details.aspx?id=53595
• Best Practices Analyzer for SQL Server
• https://www.microsoft.com/en-in/download/details.aspx?id=29302
• SQL Server 2016 Setup: System Configuration Checker
• Custom scripts
Upgrade strategies - planning, options, methodology and tools
Upgrade scenarios – Clustering, Mirroring and so on
Lessons learned and recommended practices
Consolidation results in lower operating costs and a greater return on infrastructure investment.
If you are comfortable with the current solution – stay with it
AlwaysOn – use secondaries for reporting, backups, loading the data warehouse, and other activities
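Routing read work to secondaries is, at the application level, a matter of sending read-intent connections somewhere other than the primary. A hypothetical round-robin router sketch (server names are placeholders; in a real AlwaysOn setup the listener does this when the connection string specifies ApplicationIntent=ReadOnly):

```python
import itertools

class ReadRouter:
    """Send writes to the primary, spread reads across readable secondaries.
    Purely illustrative -- not AlwaysOn read-only routing itself."""
    def __init__(self, primary, secondaries):
        self.primary = primary
        self._reads = itertools.cycle(secondaries)  # round-robin over replicas

    def route(self, intent):
        return self.primary if intent == "write" else next(self._reads)

router = ReadRouter("sql-primary", ["sql-rpt1", "sql-rpt2"])
print([router.route(i) for i in ("read", "write", "read", "read")])
# ['sql-rpt1', 'sql-primary', 'sql-rpt2', 'sql-rpt1']
```

Offloading reporting queries and backups this way keeps the primary's resources free for transactional work, which is the main operational payoff of readable secondaries.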