In this presentation we describe the techniques and tools used to analyze SQL Server workloads, and we introduce baselining and benchmarking techniques.
Build Your Custom Performance Testing Framework (TechWell)
Performance testing requires knowledge of systems architecture, techniques to simulate the load equivalent of sometimes millions of transactions per day, and tools to monitor/report runtime statistics. With the evolution from desktop to web and now the cloud, performance testing involves an unparalleled combination of different workloads and technologies. There is no one tool available—either commercial or open source—that meets all performance testing needs. Some tools act as load generators; others only monitor system resources; and many only operate for specific applications or environments. Prashant Suri shares the essential components you need for a comprehensive performance test framework and explores why each component is required for a holistic test. Learn how to develop your custom framework―starting with parsing test scripts in a predefined format, iterating over test data, employing distributed load generators, and integrating test monitors into the framework. Discover how building your own framework gives you flexibility to challenge multiple performance problems—and save thousands of dollars along the way.
Immutable infrastructure isn’t the answer (Sam Bashton)
Immutable infrastructure wasn't suitable for the consultancy's needs as it led to long deployment times and a lack of visibility into instance configurations. Instead, they developed a hybrid approach using Packer, Puppet, S3, and AWS services that provides faster deployments, self-healing infrastructure, and a known, verifiable state for instances. This allows them to focus on application development rather than infrastructure management.
Database Build and Release - SQL In The City - Ernest Hwang (Red Gate Software)
Ernest Hwang discusses automating database maintenance tasks using Red Gate SQL Source Control, continuous integration with Jenkins, and unit testing with tSQLt. He demonstrates committing database changes to source control, how continuous integration runs builds and tests on each commit, and deploying changes between environments with SQL Compare. Automating these processes saves developers time by eliminating manual scripting and deployments while improving quality by catching errors early in continuous integration builds and unit tests.
Sencha Roadshow 2017: Best Practices for Implementing Continuous Web App Testing (Sencha)
Learn how to create end-to-end functional tests quickly across multiple browsers simultaneously and scale the automated test suite to over thousands of test cases and cross-browser combinations for a complete regression cycle. We will demonstrate how we are able to locate a component, generate test code, and execute tests from TeamCity.
Build, Test and Extend Integrated Workflows 3.7 (StephenKardian)
This document discusses building, testing, and extending integrated workflows in Neuron ESB. It covers containers, flow control, exception management, external assemblies, message processing, arguments and variables, custom workflows and activities, building workflows, and debugging workflows. The goals are to understand complex workflow logic, custom activities, flow control, exception handling, external code integration, message transformation, and testing workflows.
stackconf 2020 | Real Continuous Deployment of JVM applications by Nicolas Fr... (NETWAYS)
A couple of years ago, continuous integration in the JVM ecosystem meant Jenkins. Since then, many other tools have become available. But new tools don’t mean new features, just new ways. Besides that, what about continuous deployment? There is no tool that deploys new versions of a JVM-based application without downtime. The only way to achieve zero downtime is to have multiple nodes deployed on a platform, and let that platform handle it, e.g. Kubernetes.
And yet, achieving true continuous deployment of bytecode on one single JVM instance is possible if one changes one’s way of looking at things. What if compilation could be seen as changes? What if those changes could be stored in a data store, and a listener on this data store could stream those changes to the running production JVM via the Attach API?
In that talk, I’ll demo exactly that using Hazelcast and Hazelcast Jet – but it’s possible to re-use the principles that will be shown using other streaming technologies.
Run tests at scale with on-demand Selenium Grid using AWS Fargate (Megha Mehta)
Blog: https://meghamehta.tech/2020/07/13/run-selenium-tests-in-containers-using-aws-ecs-fargate/
YouTube link to the talk: https://www.youtube.com/watch?v=npSlm1YUp-Q
This presentation is from my talk at DSTC 2019, showcasing a cloud-native container solution to run 1000s of tests in an on-demand, scalable Selenium Grid.
Azure DevOps added Multi-Stage Pipelines as part of the Pipelines offering, enabling version-controlled CI/CD expressed as YAML.
These slides are based on the information available in August 2019 on how a pipeline can be constructed.
Lucee: writing your own debugging template (Gert Franz)
In this session you learned how to write your own debugging template for Lucee. You can also find the template, which makes it easy to navigate the myriad of data provided by the debugging interface in Lucee.
The debugging template can be downloaded from here: http://www.rasia.ch/downloads/debugging-template.zip
Lucee: writing your own debugging template (Gert Franz)
This presentation introduces the use of custom debugging templates in Lucee: how to write your own and how to populate them with your own data. The talk is targeted at Lucee, but is also applicable to ACF.
In this session we will have a look at the different Caching options in Lucee and introduce a new tool called ArgusCache, which will allow you to tune your applications, WITHOUT touching the source code.
Sascha Möllering discusses infrastructure as code and provides an overview of VMware SDKs, Chef, and using Chef to configure JBoss middleware. He explains that VMware has multiple SDKs and that the VI Java SDK simplifies development. Chef is introduced as a tool to automate and standardize server configurations. The presentation then covers using Chef recipes to deploy and configure JBoss application servers and integrating with JBoss Operations Network for monitoring.
This document summarizes provisioning methods and best practices in Ivanti. It discusses:
1. PXE and vBoot provisioning methods for remote, on-site, and lab environments.
2. Common problems like incorrect preferred server credentials, broken core communication, or outdated PXE repositories. Logs and troubleshooting steps are also provided.
3. Template settings like "Stop processing template if action fails" and how they affect PXE vs vBoot workflows.
4. Enhancements in Ivanti 2016.3 like PXE client self-election, real-time variable lookup, and identifying machine attributes.
Free and useful tools have proliferated since the launch of the CodePlex and SourceForge websites. Join Kevin Kline, long-time author of the SQL Server Magazine column "Tool Time", as he profiles the very best of the free tools covered in his monthly column - dozens of free tools and utilities! Some of the covered tools help to:
- Track database growth
- Implement logging in SSIS job steps
- Stress test your database applications
- Automate important preventative maintenance tasks
- Automate maintenance tasks for Analysis Services
- Help protect against SQL Injection attacks
- Graphically manage Extended Events
- Utilize PowerShell scripts to ease administration
And much more. These tools are all free and independently supported by SQL Server enthusiasts around the world.
This document discusses common mistakes made with SQL Server and how to avoid them. It covers topics like backups, consistency checks, log cleanup, statistics maintenance, index maintenance, memory settings, parallelism settings, TempDB configuration, alerts, and power settings. The author is Tim Radney, a SQL Server MVP, who provides recommendations and scripts for ensuring databases are properly maintained and optimized.
This document discusses managing performance tests through a pipeline in Jenkins. It identifies issues with current approaches like the need for programming and infrastructure knowledge. It proposes automating the pipeline in Jenkins to dynamically create and delete test client servers, run tests in parallel using Gatling, stash and collect logs, generate reports from logs, and publish results. Sample pipeline code is provided. The goal is to improve the process by making it less manual and complex.
Using Redgate, AKS and Azure to bring DevOps to your Database (Red Gate Software)
Join Hamish Watson and Rob Sewell to learn practical solutions on how to bring DevOps to your database, including:
• The importance of getting your database code into source control
• How to test your database changes
• Tools you can use to automate build and test processes
• How to build an automated deployment process for your database with Redgate tools
• How to embrace using Azure Kubernetes Services (AKS) in your deployment pipeline
• Deploying your entire pipeline as and when it is needed from Dev to Prod saving your organisation money
Database upgrades and data in general are often the most complicated part of your deployment process, so having a robust deployment path and checks before getting to production is very important.
The demos will showcase practical solutions that can help you and your team bring DevOps to your database using SQL Source Control, infrastructure as code, docker containers and SQL Change Automation – all leading up to a fully automated test and deployment process.
This will be a fun-filled fast paced hour and you will learn some new skills which will bring immediate benefit to your organization.
ENASE 2013 - SEM - (Francia) From Functional Test Scripts to Performance Test... (Federico Toledo)
When modernizing systems, the software is migrated from one platform to another. There are big risks concerning the performance the system will have on the new platform: a new system cannot take more time to perform the same operations than the previous one, or users will reject it. Therefore, preventive performance testing is crucial to guarantee the success of the modernization project. However, the automation tasks for performance testing are very demanding in terms of time and effort, as the tools work at the communication protocol level. Though not free, functional test automation is easier to accomplish than performance test automation because the tools work at the graphical user interface level; they are therefore more intuitive and have to handle fewer variables and technical issues. In this article we present a tool, developed for industrial use, that automatically generates performance test scripts from automated functional tests. The tool has been used in several industry projects, achieving important effort savings and improving flexibility.
This document discusses a company's migration from Puppet to Ansible for infrastructure automation. It provides an overview of the company's cloud platform and automation needs. It then compares Puppet and Ansible, noting Ansible's advantages for the company's use case such as its lower learning curve and more human-readable scripts. The document outlines the process used to develop Ansible scripts and notes improvements in results and future plans to enhance the automation roadmap.
Working Well Together: How to Keep High-end Game Development Teams Productive (Perforce)
This document discusses Guerrilla Games' development process for their game Killzone Shadow Fall. It describes how they use Perforce for version control and continuous integration. Key aspects of their process include having most developers work on the main trunk, extensive testing before and after code commits, labeling builds to sync safely to known good versions, and branching for frequent weekly releases while keeping the main branch for ongoing development. This approach allows for speed of iteration while maintaining control over the development cycle and releases.
This document introduces performance testing. It discusses what performance testing is, why it is needed, and when to perform tests. It also outlines the performance testing process and different types of performance tests, including capacity/volume testing, spike testing, stress testing, and load testing. The goal of performance testing is to determine a software program's speed, effectiveness, responsiveness, reliability, throughput, and scalability under different workloads.
This document discusses common patterns used in test automation frameworks, including page object, business layer, singleton, composition, and factory patterns. It describes the page object pattern and limitations like test intent becoming imperative. The business layer page object pattern addresses these by validating business requirements. Test data patterns are also discussed, with criteria like data being complex, unique, and potentially dynamic. External files, properties, and databases are examples of specifying test data. Locator patterns include specifying locators in page objects or separate files. Overall, patterns aid in communication, reduce complexity, and help tests be of production quality and easier to implement, maintain, and scale. The best pattern depends on the specific context.
This document discusses performance testing for service oriented architectures (SOA) and enterprise middleware. It covers typical server performance patterns like latency and throughput. It recommends establishing a clustered test environment that mirrors production. Open source tools for performance testing SOA are discussed, including Apache JMeter for load testing and soapUI for functional tests. The document also covers monitoring tools for analyzing performance.
WSO2 Test Automation Framework: Approach and Adoption (WSO2)
The document discusses WSO2's test automation framework. It provides an overview of the framework's requirements, history, approach, architecture, and structure. The framework is designed to make automation easy, organized, relevant and optimized. It supports integration, platform, and cloud-based deployment testing across multiple WSO2 products. Key aspects include flexible execution modes, a context provider, server management, the automation API, utilities, and reporting. Challenges and future areas of focus are also presented, along with demonstrations of sample tests.
1) Ansible is being used at Backbase to automate the provisioning of different server configurations for testing their Customer Experience Platform (CXP).
2) A REST API and UI allow users to easily provision new environments from available server stacks configured with Ansible for testing.
3) This enables Backbase to implement continuous delivery practices like automated testing of new versions without affecting production environments.
Migrating your application to the cloud is extremely simple, on paper. The hard truth is that the only way to know for sure how it will behave is to test carefully. Extracting a benchmark on premises is hard enough, but benchmarking in the cloud can get really complicated because of the restrictions in PaaS environments and the lack of tools.
Join me in this session to learn how to capture a workload from production, replay it against your cloud database, and compare performance. I will walk you through the methodology and the tools to move your database to the cloud without batting an eye.
By Gianluca Sartori
App Dynamics & SOASTA Testing & Monitoring Coverage, March 2012 (SOASTA)
Dan Bartow, VP Performance Engineering, SOASTA and Steve Burton, Technology Evangelist, AppDynamics discuss the convergence of two traditionally separate domains and also demo testing and troubleshooting with CloudTest & AppDynamics.
The document discusses several new management enhancements in Oracle Database 11g, including:
1) Change capture and replay capabilities to set up test environments and perform online application upgrades.
2) Snapshot standbys for test environments that allow testing and discarding of writes without impacting the primary database.
3) Database replay to capture and replay workloads in pre- and post-change systems to analyze for errors or performance issues.
4) Several new capabilities for online patching, upgrades, and automatic diagnostic workflows.
Visual Studio 2005 Database Professional Edition (David Truxall)
This document summarizes Visual Studio Team System for database professionals. It discusses how VSTS allows database schemas and changes to be managed through source control. It enables automated database testing through unit testing and generation of test data. Database professionals can now be fully integrated into the development lifecycle through features such as work item tracking, schema comparison, and refactoring of database objects.
SQL Server Tuning for SharePoint: what every consultant must know (Office 36...) (serge luca)
This document discusses tuning SQL Server for SharePoint. It covers operating system settings like NTFS allocation units and IOPS requirements. SQL Server configuration topics include using a dedicated SQL instance, database recovery models, tempdb settings, and file placement. Database concepts explained include SharePoint database types and SQL Server integration. SQL Server optimization techniques like resource governor and performance counters are also outlined. High availability and disaster recovery options like Always On Availability Groups are then covered.
This document provides an overview of SQL Server 2014 In-Memory OLTP. It discusses the benefits of In-Memory OLTP such as reducing contention, reducing logging, and minimizing execution time. It covers pre-implementation requirements, limitations, memory optimized structures like tables and indexes, and the table and stored procedure creation processes. It also discusses hardware trends enabling In-Memory OLTP, demoes basic functionality, and outlines approaches for migrating workloads to In-Memory OLTP.
This presentation discusses continuous database deployments. It begins with an introduction of the presenter and an overview of topics to be covered. It then contrasts manual database change management with continuous deployment. The main methods covered are schema-based, using the database schema in source control; script-based, using change scripts; and code-based, coding database changes. Benefits include reduced errors and faster releases. Best practices discussed include backing up data and deploying breaking changes in steps. The presentation concludes with a call for questions.
Tuning SQL Server for SharePoint - SharePoint Summit Toronto 2014 (serge luca)
Tuning SQL Server for SharePoint 2013 - SharePoint Summit Toronto 2014 - Serge Luca (SharePoint MVP) and Isabelle Van Campenhoudt (SQL Server MVP); ShareQL, Belgium
Rolta’s application testing services for handling an ever-changing environment (Rolta)
Many changes take place every day, every minute: updates, upgrades, patches, and more. To handle these everyday changes and alleviate testing pressure, Rolta introduces Real Application Testing (RAT). RAT comes in handy when Oracle tools like SQL Performance Analyzer and Database Replay are in use. The presentation also gives examples of a couple of test cases.
Tuning SQL Server for SharePoint 2013 - What every SharePoint consultant need... (serge luca)
Tuning SQL Server for SharePoint: what every SharePoint consultant needs to know - SharePoint Summit Vancouver - Serge Luca (SharePoint MVP) and Isabelle Van Campenhoudt (SQL Server MVP); ShareQL, Belgium
This document discusses Oracle's innovations for testing applications on Oracle databases. It introduces Oracle's Application Quality Management Suite which includes tools for testing application changes, infrastructure changes, and secure test data management. It focuses on Real Application Testing which allows testing database tier changes through workload capture and replay. It provides enhancements to SQL Performance Analyzer and discusses how Real Application Testing can be used to test database upgrades and other changes in a cost effective manner.
Database Provisioning in EM12c: Provision me a Database Now! (Maaz Anjum)
My presentation for Georgia Oracle User Group on December 12, 2013. In it, I discuss the Database Provisioning feature in Enterprise Manager 12c with an example of how I architected a solution by leveraging it.
This document provides a checklist for configuring SQL Server that includes recommendations for the Windows OS, patches, page file, antivirus, RAID, disk formatting, power options, network cards, installation directories, SQL Server services accounts, trace flags, server properties, memory settings, TempDB configuration, and database configuration. It offers guidance on topics like Windows version, patch level, page file size, antivirus exclusions, RAID level, disk formatting, power plan, network redundancy, installation directories, services accounts, trace flags, memory allocation, TempDB files, and database growth settings.
Presenter: Ernest Hwang of Practice Fusion. This presentation shows how to simplify your database deployments, ensure that no database changes are overlooked, and implement unit tests using the suite of Red Gate developer tools.
You'll see how Practice Fusion streamlines database deployments in their Integration, Testing, Staging, and Production environments. This frees developers from the burden of maintaining deployment scripts, while reducing the number of overlooked breaking changes to zero.
The demo uses a Windows Azure box as the Jenkins (Continuous Integration) server and several SQL Azure databases (representing Integration and QA environments). The entire repository is hosted on GitHub (https://github.com/CF9/Databases.RGDemo), for anyone to download.
You'll learn how to:
* Add your database to source control in under five minutes
* Create a CI Job to validate your database “build”
* Deploy database changes to your environments with a mouse click
* Set up database unit testing using tSQLt
* Avoid problems when implementing Database CI in the “real-world”
Ernest Hwang is a Principal Software Engineer at Practice Fusion in San Francisco. He uses Red Gate SQL Source Control, SQL Compare, SQL Data Compare, and SQL Test to automate Practice Fusion's Continuous Integration efforts and instrument database deployments.
This document summarizes an SQL Server 2008 training course on implementing high availability features. It discusses database snapshots that allow querying from a point-in-time version of a database. It also covers configuring database mirroring, which provides redundancy by synchronizing a principal database to a mirror. Other topics include partitioned tables for improved concurrency, using SQL Agent proxies for job security, performing online index operations for minimal locking, and setting up mirrored backups.
Continuent Tungsten - Scalable SaaS Data Management (guest2e11e8)
The key needs of SaaS vendors include:
i) managing multi-tenant architectures with a shared DBMS, ii) maintaining customer SLAs for uptime and performance, and iii) optimized, efficient operations.
The key benefits Continuent Tungsten offers SaaS vendors are:
i) high availability and protection from data loss, ii) simple, efficient cluster management, and iii) support for complex database topologies.
Tungsten offers high-availability, database cluster management and management of complex topologies for multi-tenant architectures.
Tungsten high availability and data protection features include maintaining live copies with data consistency checking and tightly coupled backup/restore integration with cluster management tools.
Tungsten cluster management allows SaaS vendors to migrate customers and perform system upgrades without downtime, thus enabling these maintenance operations during normal business hours.
Tungsten also enables complex replication topologies, including data filtering and data archiving strategies, maintaining extra data copies for data-marts, routing different customers to different DBMS copies, and providing cross-site multi-master replication.
This presentation is for those of you who are interested in moving your on-prem SQL Server databases and servers to Azure virtual machines (VM’s) in the cloud so you can take advantage of all the benefits of being in the cloud. This is commonly referred to as a “lift and shift” as part of an Infrastructure-as-a-service (IaaS) solution. I will discuss the various Azure VM sizes and options, migration strategies, storage options, high availability (HA) and disaster recovery (DR) solutions, and best practices.
SQL Server 2016 introduces new capabilities to help improve performance, security, and analytics:
- Operational analytics allows running analytics queries concurrently with OLTP workloads using the same schema. This provides minimal impact on OLTP and best performance.
- In-Memory OLTP enhancements include greater Transact-SQL coverage, improved scaling, and tooling improvements.
- The new Query Store feature acts as a "flight data recorder" for databases, enabling quick performance issue identification and resolution.
Let’s face it: there are too many Best Practices to really know them all, let alone choose which ones should be applied first. Does your telephone ring all the time? Do your users ask for that “quick report” that instead takes ages and keeps changing every time you think it’s done? Have you ever thought that in dire times avoiding Worst Practices could be a good starting point, leaving fine tuning for a better future? If the answer is “yes”, then this session is for you: together we will discover how not to torture a SQL Server instance, and we will see how to avoid choices that in the long run could turn out to be less smart than they looked initially.
This document provides an overview of new security features in SQL Server 2016, including Always Encrypted for encrypting sensitive data at the column level, Dynamic Data Masking for masking sensitive data rather than encrypting it, and Row Level Security for fine-grained access control at the row level. Always Encrypted allows queries on encrypted data and provides application transparency. Dynamic Data Masking masks sensitive data in the result set without requiring application changes. Row Level Security uses security predicates and policies to centrally define and apply row-level access control logic within the database.
This document discusses using Extended Events in SQL Server to monitor and respond to events in near real-time. It introduces Extended Events concepts and the streaming target type that allows processing events as they occur. The presenter demonstrates streaming Extended Events in C# and PowerShell and introduces the Extended T-SQL Collector, an open source tool that saves Extended Events data to SQL Server tables and provides alerting functionality.
This document discusses SQL Server security best practices. It begins by noting that data breaches are common and costly for businesses. The presenter then covers security principles of confidentiality, integrity and availability. Various attack methods are described, demonstrating how quickly an unsecured system can be compromised. The presentation recommends implementing security policies across physical, network, host, application and database layers. Specific issues like SQL injection and authentication/authorization approaches are discussed. New SQL Server 2016 security features such as Always Encrypted and row-level security are also mentioned. Resources for further information are provided.
This document discusses SQL Server worst practices related to design, development, installation, and administration. Some key worst practices highlighted include not normalizing database schemas, using dynamic SQL with hardcoded literals, installing SQL Server with default settings, relying on autogrow for disk space management, and having no monitoring or alerting configured. The document encourages learning from other's mistakes to avoid common pitfalls, and provides resources for best practices analysis and SQL Server troubleshooting.
How many times have we spent countless hours looking for a T-SQL solution to the fancy requests of our users, only to discover later that our code doesn’t perform acceptably?
What can we do to improve the performance of our code?
Is there a methodology to follow in order to deliver better performance?
What are the mistakes to avoid?
2. Gianluca Sartori
• Independent SQL Server consultant
• SQL Server MVP, MCTS, MCITP, MCT
• Has worked with SQL Server since version 7
• DBA @ Scuderia Ferrari
• Blog: spaghettidba.com
• Twitter: @spaghettidba
3. Agenda
• Baselining & Benchmarking scenarios
• Tools
• Capturing in production
• Running the Replay
• Comparing results
6. What is a benchmark?
Describes performance in a given scenario
Evaluates what-if scenarios
HW/Software Upgrade
Virtualization…
Helps rate tuning efforts
Server-wide configuration changes
Addition / removal of indexes
9. Production vs. Test
[Slide diagram: CAPTURE the workload in PRODUCTION (the BASELINE), REPLAY it on the TEST server and CAPTURE it there (the BENCHMARK), then COMPARE the two captures from the DBA’s laptop.]
• Patching
• SW Upgrade
• Consolidation
• Virtualization
• HW Upgrade
10. Test vs. Test
[Slide diagram: CAPTURE the workload in PRODUCTION, REPLAY it on the TEST server and CAPTURE the result (the BASELINE), CHANGE SOMETHING, then REPLAY and CAPTURE again (the BENCHMARK) and COMPARE the two test captures from the DBA’s laptop.]
• Configuration change
• Database Tuning
12. Capture Tools
• Profiler
Non-negligible performance impact on the server
• Extended Events (see the capture sketch after this list)
Low performance overhead
Big files
Not compatible with all versions of SQL Server
• SQL Trace
«Old school»
Compatible with all versions
Compatible with more analysis/replay tools
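To make the Extended Events option concrete, here is a minimal capture sketch that records completed batches and RPC calls to a file target. The session name, file path, and event/action list are assumptions to adapt to your environment, not the exact session from the deck:

    -- Minimal capture session: names and paths are placeholders.
    CREATE EVENT SESSION WorkloadCapture ON SERVER
    ADD EVENT sqlserver.sql_batch_completed (
        ACTION (sqlserver.session_id, sqlserver.client_hostname, sqlserver.database_name)
    ),
    ADD EVENT sqlserver.rpc_completed (
        ACTION (sqlserver.session_id, sqlserver.client_hostname, sqlserver.database_name)
    )
    ADD TARGET package0.event_file (
        SET filename = N'D:\capture\WorkloadCapture.xel', max_file_size = 1024
    )
    WITH (MAX_MEMORY = 64 MB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);
    -- Start it only after the synchronization backup (see the "What to capture" slide).

Reusing the same session definition for the benchmark capture on the test server keeps the two sets of .xel files directly comparable.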
13. Analysis Tools
• RML Utilities
ReadTrace
Reporter
• SQLNexus
Based on RML Utilities
• ClearTrace
Can analyze trace files
• PAL
Performance counters analysis with thresholds
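PAL consumes Performance Monitor logs; if you also want a quick counter baseline taken from inside SQL Server, a sketch along these lines can be scheduled as an Agent job. The table and the counter list are assumptions, not part of the original deck:

    -- Hypothetical baseline table: adapt the names and pick the counters that matter to you.
    CREATE TABLE dbo.CounterBaseline (
        capture_time  datetime2(0)  NOT NULL DEFAULT SYSDATETIME(),
        object_name   nvarchar(128) NOT NULL,
        counter_name  nvarchar(128) NOT NULL,
        instance_name nvarchar(128) NULL,
        cntr_value    bigint        NOT NULL
    );

    -- Values are nchar-padded in the DMV, hence the RTRIMs.
    INSERT INTO dbo.CounterBaseline (object_name, counter_name, instance_name, cntr_value)
    SELECT RTRIM(object_name), RTRIM(counter_name), RTRIM(instance_name), cntr_value
    FROM sys.dm_os_performance_counters
    WHERE RTRIM(counter_name) IN (N'Batch Requests/sec', N'Page life expectancy', N'User Connections');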
14. Replay Tools
• Profiler
Can set breakpoints for debugging
• RML Utilities
Ostress
Stress / Replay mode
• Distributed Replay
Introduced in SQL Server 2012
Can replay a workload from multiple clients
Supports synchronization mode
16. What to capture
Database
Full database backup
All databases involved
Synchronize with the trace (see the sketch after this list)
Queries
SQL Trace vs. Profiler vs. Extended Events
Which events?
Which columns?
Any templates?
Performance data
Performance counters
Which counters?
PAL threshold file
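The synchronization step deserves a concrete example: take the full backup first and start the capture immediately afterwards, so that the restored test database matches the first events in the trace. A minimal sketch, with placeholder names and the WorkloadCapture session from the earlier sketch:

    -- Placeholder database and path names.
    BACKUP DATABASE MyApp
        TO DISK = N'D:\backup\MyApp_baseline.bak'
        WITH INIT, COMPRESSION, CHECKSUM;

    -- Start capturing as soon as the backup completes.
    ALTER EVENT SESSION WorkloadCapture ON SERVER STATE = START;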
19. Prepare the environment
Server
Logins
Any server object captured in the source workload
Database
Restore the database(s) (see the sketch after this list)
Queries
Pre-process the trace files
(Synchronize with the backup time, apply filters…)
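As a sketch of the restore step on the test server (file names are placeholders): any login referenced in the captured workload must also exist before the replay starts, or the replayed sessions will fail to connect.

    -- Placeholder paths: adjust to the test server layout.
    RESTORE DATABASE MyApp
        FROM DISK = N'D:\backup\MyApp_baseline.bak'
        WITH MOVE N'MyApp'     TO N'D:\data\MyApp.mdf',
             MOVE N'MyApp_log' TO N'E:\log\MyApp_log.ldf',
             REPLACE, RECOVERY;

    -- Hypothetical login referenced by the workload; use real credentials and policies.
    IF SUSER_ID(N'app_user') IS NULL
        CREATE LOGIN app_user WITH PASSWORD = N'ChangeMe_123', CHECK_POLICY = OFF;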
20. Replay Tools

                      Profiler   Ostress           Distributed Replay
Multithreading        YES        YES               YES
Debugging             YES        NO                NO
Synchronization mode  NO         YES               YES
Stress mode           YES        YES               YES
Distributed mode      NO         NO                YES
Input format          Trace      Trace, RML, SQL   XEL, Trace
27. Alternatives?
Frequent issues
Database is too big for backup / restore (e.g. 10 TB)
Workload is too heavy to capture
Too much space on disk, too many .TRC files to analyze
Not enough time captured => not meaningful enough
Possible solutions
Replay events with XEvents streaming API
Avoid replay completely and run a representative workload
Query Store
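Query Store (SQL Server 2016 and later) sidesteps part of the capture problem: it records normalized queries and their runtime statistics inside the database itself, so a before/after comparison needs no trace files at all. A minimal sketch, assuming a database named MyApp:

    ALTER DATABASE MyApp SET QUERY_STORE = ON;
    ALTER DATABASE MyApp SET QUERY_STORE (OPERATION_MODE = READ_WRITE, INTERVAL_LENGTH_MINUTES = 15);

    -- Heaviest queries by average duration (microseconds) over the captured intervals.
    SELECT TOP (20)
           qt.query_sql_text,
           SUM(rs.count_executions) AS executions,
           AVG(rs.avg_duration)     AS avg_duration_us
    FROM sys.query_store_query_text AS qt
    JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
    JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
    GROUP BY qt.query_sql_text
    ORDER BY avg_duration_us DESC;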