When you are faced with making changes to your database environment, such as upgrades, patches, hardware installations, or virtual machine implementations, there are inherent risks that database performance or availability will suffer.
Oracle ACE and author Bert Scalzo will show how to reduce these risks and demonstrate how Benchmark Factory can help you scale your database for best possible results.
Securing MongoDB to Serve an AWS-Based, Multi-Tenant, Security-Fanatic SaaS A...MongoDB
MongoDB introduces new capabilities that change the way microservices interact with the database, capabilities that are either absent or exist only partially in high-end commercial databases such as Oracle. In this session I will share my experience building a cloud-based, multi-tenant SaaS application with extreme security requirements. We will cover considerations for storing multi-tenant data in the database, best practices for authentication and authorization, and performance considerations specific to security in MongoDB.
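As a hedged illustration of the multi-tenant data-scoping pattern the abstract alludes to, here is a minimal Python sketch of building tenant-pinned query filters; the field names (`tenant_id`, `status`) and the helper itself are hypothetical assumptions, not from the talk:

```python
# Hypothetical sketch of tenant scoping for a multi-tenant MongoDB
# collection: every query filter pins the tenant, so one tenant can
# never read another's documents. Field names and the helper are
# illustrative, not from the talk.

def scoped_filter(tenant_id, extra=None):
    """Build a MongoDB-style query filter that always pins tenant_id."""
    query = {"tenant_id": tenant_id}
    if extra:
        # Refuse caller attempts to override the tenant boundary.
        if extra.get("tenant_id", tenant_id) != tenant_id:
            raise ValueError("cross-tenant query rejected")
        query.update(extra)
    return query

# With pymongo, such a filter would be passed straight to find(), e.g.
#   db.orders.find(scoped_filter("acme", {"status": "open"}))
```

Centralizing the filter construction like this makes it much harder for a single forgotten `tenant_id` clause to leak data across tenants.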
Take advantage of ScyllaDB’s wide-column NoSQL features, such as workload prioritization, to balance the needs of OLTP and OLAP in the same cluster. Plus, learn about the different compaction strategies and which one is right for your workload, with additional insights on properly sizing your database and using open source tools for observability.
AWS meets Android - Make development easier with the "AWS SDK for Android"! SORACOM, INC
These are the slides from my talk at Android Bazaar and Conference 2011 Winter, with brief commentary added. The talk overview follows.
Amazon Web Services (AWS) is a pioneer of cloud services, offering virtual servers, storage, databases, and more at low pay-as-you-go rates with no upfront cost. The AWS SDK for Android is a library for mobile applications that makes it very easy for developers to use AWS services from their mobile apps. Using Amazon Simple Storage Service (S3), which offers unlimited capacity and gives every stored object its own URL, you can easily build applications that deliver images and video quickly via a CDN (content delivery network). Other useful services include Amazon SimpleDB, a highly flexible and scalable NoSQL service, and Amazon SQS and Amazon SNS, which provide reliable, scalable queuing and messaging. This session covers an overview of the Amazon cloud and the AWS SDK for Android, plus demonstrations that convey the full appeal of both.
Cloud-Native Apache Spark Scheduling with YuniKorn SchedulerDatabricks
Kubernetes is the most popular container orchestration system, natively designed for the cloud. At Lyft and Cloudera, we have each built next-generation, cloud-native infrastructure based on Kubernetes that supports various distributed workloads.
Video of the presentation can be seen here: https://www.youtube.com/watch?v=uxuLRiNoDio
The Data Source API in Spark is a convenient feature that enables developers to write libraries to connect to data stored in various sources with Spark. Equipped with the Data Source API, users can load/save data from/to different data formats and systems with minimal setup and configuration. In this talk, we introduce the Data Source API and the unified load/save functions built on top of it. Then, we show examples to demonstrate how to build a data source library.
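The unified load/save functions described above boil down to dispatching on a short format name to a pluggable source implementation. The following toy Python sketch illustrates that dispatch pattern only; the registry, decorator, and reader functions are hypothetical, and real Spark resolves sources through its own Data Source API rather than anything like this:

```python
# Toy sketch of name-based data-source dispatch, in the spirit of the
# unified load API. All names here are hypothetical; real Spark
# resolves sources through its Data Source API.

_readers = {}

def register_source(fmt):
    """Decorator that registers a reader under a short format name."""
    def wrap(fn):
        _readers[fmt] = fn
        return fn
    return wrap

@register_source("csv")
def read_csv(text):
    return [line.split(",") for line in text.splitlines()]

@register_source("lines")
def read_lines(text):
    return text.splitlines()

def load(fmt, text):
    """Unified entry point: pick the reader by format name."""
    try:
        return _readers[fmt](text)
    except KeyError:
        raise ValueError(f"unknown format: {fmt}") from None
```

In actual Spark, the same idea surfaces as `spark.read.format("csv").load(path)` and the symmetric `df.write.format(...).save(path)`.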
One of the most important things you can do to improve the performance of your flash/SSDs with Aerospike is to properly prepare them. This Presentation goes through how to select, test, and prepare the drives so that you will get the best performance and lifetime out of them.
Spark started at Facebook as an experiment when the project was still in its early phases. Spark's appeal stemmed from its ease of use and an integrated environment to run SQL, MLlib, and custom applications. At that time the system was used by a handful of people to process small amounts of data. However, we've come a long way since then. Currently, Spark is one of the primary SQL engines at Facebook, in addition to being the primary system for writing custom batch applications. This talk will cover the story of how we optimized, tuned, and scaled Apache Spark at Facebook to run on tens of thousands of machines, processing hundreds of petabytes of data, and used by thousands of data scientists, engineers, and product analysts every day. We'll focus on three areas:
- Scaling compute: how Facebook runs Spark efficiently and reliably on tens of thousands of heterogeneous machines in disaggregated (shared-storage) clusters.
- Optimizing the core engine: how we continuously tune, optimize, and add features to the core engine in order to maximize the useful work done per second.
- Scaling users: how we make Spark easy to use and faster to debug, to seamlessly onboard new users.
Speakers: Ankit Agarwal, Sameer Agarwal
Apache Spark At Apple with Sam Maclennan and Vishwanath LakkundiDatabricks
At Apple we rely on processing large datasets to power key components of Apple’s largest production services. Spark is continuing to replace and augment traditional MR workloads with its speed and low barrier to entry. Our current analytics infrastructure consists of over an exabyte of storage and close to a million cores. Our footprint is also growing further with the addition of new elastic services for streaming, ad hoc, and interactive analytics.
In this talk we will cover the challenges of working at scale with tricks and lessons learned managing large multi-tenant clusters. We will also discuss designing and building a self-service elastic analytics platform on Mesos.
Database Performance at Scale Masterclass: Database Internals by Pavel Emelya...ScyllaDB
Pavel Emelyanov, Principal Engineer at ScyllaDB
Botond Dénes, C++ Developer at ScyllaDB
What performance-minded engineers need to know.
Hear from Pavel Emelyanov and Botond Dénes on the impact of database internals – specifically, what to look for if you need latency and/or throughput improvements.
Sparklens: Understanding the Scalability Limits of Spark Applications with R...Databricks
One of the most common requests we receive from customers at Qubole is help debugging slow Spark applications. Usually this is done by trial and error, which takes time and requires running clusters beyond normal usage (read: wasted resources). Moreover, it doesn’t tell us where to look for further improvements. We at Qubole are working to make this process more self-serve. Toward this goal we have built a tool (open sourced at https://github.com/qubole/sparklens) based on the Spark event listener framework.
From a single run of an application, Sparklens provides insights about that application's scalability limits. In this talk we will cover what Sparklens does and the theory behind it. We will discuss how the structure of a Spark application puts important constraints on its scalability, how to find these structural constraints, and how to use them as a guide in solving performance and scalability problems.
This talk will help the audience answer the following questions about their Spark applications: 1) Will the application run faster with more executors? 2) How will cluster utilization change as the number of executors changes? 3) What is the absolute minimum time the application would take even with infinite executors? 4) What is the expected wall clock time once the most important structural limits are fixed? Sparklens makes the ROI of additional executors obvious for a given application, and needs just a single run to determine how the application will behave with different executor counts. In particular, it helps managers make the right tradeoff between spending developer time optimizing applications and spending money on compute bills.
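The "infinite executors" lower bound in question 3 can be sketched with a simplified model: driver-side time is never parallelized, and each stage cannot finish faster than its single longest task. This is an illustrative approximation under those assumptions, not Sparklens's actual algorithm:

```python
# Simplified model of the "infinite executors" lower bound that tools
# like Sparklens reason about. Structure and numbers are illustrative.

def ideal_wall_clock(driver_seconds, stages):
    """Lower bound: stages is a list of per-stage task-duration lists.
    Even unlimited parallelism cannot beat each stage's longest task,
    and driver-side time is never parallelized."""
    return driver_seconds + sum(max(tasks) for tasks in stages)

def estimated_wall_clock(driver_seconds, stages, executors):
    """Crude estimate for a fixed executor count: each stage takes at
    least total-work / executors, and never less than its longest task."""
    total = driver_seconds
    for tasks in stages:
        total += max(sum(tasks) / executors, max(tasks))
    return total
```

For example, with 60 s of driver time and stages of 100×10 s and 40×5 s tasks, the floor is 75 s no matter how many executors are added, while 10 executors give roughly 180 s; the gap between the two numbers is what makes the executor-count ROI visible.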
Bucket your partitions wisely - Cassandra summit 2016Markus Höfer
When we talk about bucketing, we essentially talk about ways to split Cassandra partitions into several smaller parts rather than having one large partition.
Bucketing Cassandra partitions can be crucial for optimizing queries, preventing large partitions, and avoiding the TombstoneOverwhelmingException that can occur when too many tombstones are created.
In this talk I want to show how to recognize large partitions during data modeling. I will also show different strategies we used in our projects to create, use, and maintain buckets for our partitions.
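As a hedged example of one common bucketing strategy, here is a time-based day bucket in Python; the schema shape `((sensor_id, bucket), ts)` and all names are illustrative assumptions, not taken from the presentation:

```python
# Illustrative time-based bucketing for a Cassandra partition key:
# events for one sensor go into day-sized partitions, e.g. a schema
# shaped like ((sensor_id, bucket), ts), instead of one partition that
# grows without bound. All names here are made up for illustration.

from datetime import datetime, timedelta

def bucket_for(ts: datetime) -> str:
    """Day bucket such as '2016-09-07', stored as part of the partition key."""
    return ts.strftime("%Y-%m-%d")

def buckets_between(start: datetime, end: datetime) -> list:
    """All day buckets a range query must fan out to (one query per bucket)."""
    days = (end.date() - start.date()).days
    return [bucket_for(start + timedelta(days=i)) for i in range(days + 1)]
```

A reader then issues one query per bucket and merges results client-side; choosing a smaller bucket (e.g. per hour) trades more query fan-out for smaller partitions.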
Where is my bottleneck? Performance troubleshooting in FlinkFlink Forward
Flink Forward San Francisco 2022.
In this talk, we will cover various topics around performance issues that can arise when running a Flink job and how to troubleshoot them. We’ll start with the basics, like understanding what the job is doing and what backpressure is. Next, we will see how to identify bottlenecks and which tools or metrics can be helpful in the process. Finally, we will also discuss potential performance issues during the checkpointing or recovery process, as well as some tips and Flink features that can speed up checkpointing and recovery times.
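One way to act on such metrics: Flink exposes per-subtask busy, idle, and back-pressured time as milliseconds-per-second rates, and a rough classification can point at where a bottleneck sits. The sketch below is illustrative only; the thresholds are assumptions, not official guidance:

```python
# Rough bottleneck classification from Flink's per-subtask time metrics
# (busyTimeMsPerSecond, idleTimeMsPerSecond, backPressuredTimeMsPerSecond).
# The 500/800 thresholds here are illustrative, not official guidance.

def classify(busy_ms, idle_ms, backpressured_ms):
    """Each argument is a ms-per-second rate; together they sum to ~1000."""
    if backpressured_ms > 500:
        return "downstream bottleneck: this task is waiting to emit"
    if busy_ms > 800:
        return "this task is the bottleneck: consider scaling it"
    return "healthy: mostly idle, waiting for input"
```

Reading the metrics this way mirrors the color coding in Flink's web UI, where heavily back-pressured subtasks indicate the slow operator lives further downstream.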
by Piotr Nowojski
Run Apache Spark on Kubernetes in Large Scale_ Challenges and Solutions-2.pdfAnya Bida
Speaker: Bo Yang
Summary: More and more people are running Apache Spark on Kubernetes due to the popularity of Kubernetes. There are many challenges, since Spark was not originally designed for Kubernetes: easily submitting and managing applications, accessing the Spark UI, allocating resource queues based on CPU/memory, etc. This talk will present how to address these challenges and provide Spark as a service at large scale.
Magnet Shuffle Service: Push-based Shuffle at LinkedInDatabricks
The number of daily Apache Spark applications at LinkedIn has increased by 3X in the past year. The shuffle process alone, one of the most costly operators in batch computation, processes PBs of data and billions of blocks daily in our clusters. With such a rapid increase of Apache Spark workloads, we quickly realized that the shuffle process can become a severe bottleneck for both infrastructure scalability and workload efficiency. In our production clusters, we have observed both reliability issues due to shuffle fetch connection failures and efficiency issues due to the random reads of small shuffle blocks on HDDs.
To tackle those challenges and optimize shuffle performance in Apache Spark, we have developed Magnet shuffle service, a push-based shuffle mechanism that works natively with Apache Spark. Our paper on Magnet has been accepted by VLDB 2020. In this talk, we will introduce how push-based shuffle can drastically increase shuffle efficiency when compared with the existing pull-based shuffle. In addition, by combining push-based shuffle and pull-based shuffle, we show how Magnet shuffle service helps to harden shuffle infrastructure at LinkedIn scale by both reducing shuffle related failures and removing scaling bottlenecks. Furthermore, we will share our experiences of productionizing Magnet at LinkedIn to process close to 10 PB of daily shuffle data.
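The core merging idea can be sketched in a few lines of Python: blocks pushed for the same reduce partition are appended into one contiguous chunk, turning many small random reads into a few large sequential ones. This is a conceptual toy under that assumption, not Magnet's actual implementation:

```python
# Conceptual toy of push-based shuffle merging: map-side blocks destined
# for the same reduce partition are appended into one contiguous chunk,
# so reducers do a few large sequential reads instead of many small
# random ones. Data structures are illustrative, not Magnet's code.

from collections import defaultdict

def merge_pushed_blocks(blocks):
    """blocks: iterable of (reduce_partition, payload_bytes) pushed by
    mappers. Returns one merged byte chunk per reduce partition."""
    merged = defaultdict(bytearray)
    for partition, payload in blocks:
        merged[partition].extend(payload)  # sequential append on the merger
    return {p: bytes(b) for p, b in merged.items()}
```

On HDDs this matters because seek time dominates small reads; merging amortizes it across an entire reduce partition's worth of data.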
Speaker: Eric Spencer, IBM Software Engineer, iNotes Development
Learn how you can customize IBM iNotes and SmartCloud Notes web to adapt your corporate look and feel, modify the available functional areas, and add new capabilities. See the improvements made in recent releases, which allow for easier customization and greater tolerance during the upgrade process. I’ll step through examples, such as modifying the items on the action bar. With some HTML and JavaScript skills you can easily extend your IBM iNotes or SmartCloud Notes web mail client to make it your own!
This is a presentation given during our studies at the Moore School of Business of the University of South Carolina on hydrogen fuel cell technologies.
How Dashtable Helps Dragonfly Maintain Low LatencyScyllaDB
Dashtable is a hashtable implementation inside Dragonfly. It supports incremental resizes and fast, cache-friendly operations. In this talk, we will learn how Dashtable helps Dragonfly keep its tail latency in check. With Dashtable, long-tail latencies have been reduced by a factor of 1000x, though P999 latencies are 7x longer. Find out why we still think this is a good tradeoff.
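To illustrate why incremental resizing matters for tail latency, here is a simplified Python sketch that migrates only a few buckets per operation instead of rehashing everything at once. This shows the general idea only; Dragonfly's Dashtable actually uses a segmented, extendible-hashing design, not this scheme:

```python
# Simplified incremental-resize sketch: instead of rehashing the whole
# table at once (a latency spike on one unlucky operation), each
# operation migrates a bounded number of buckets to the new table.
# Illustrative only; Dashtable's real design is segment-based.

class IncrementalMap:
    MIGRATE_PER_OP = 2

    def __init__(self, nbuckets=4):
        self.old = None                            # table being drained
        self.new = [[] for _ in range(nbuckets)]   # live table
        self.cursor = 0                            # next old bucket to move
        self.count = 0

    def _bucket(self, table, key):
        return table[hash(key) % len(table)]

    def _step(self):
        """Move at most MIGRATE_PER_OP old buckets: bounded work per op."""
        if self.old is None:
            return
        for _ in range(self.MIGRATE_PER_OP):
            if self.cursor == len(self.old):
                self.old = None
                return
            for k, v in self.old[self.cursor]:
                self._bucket(self.new, k).append((k, v))
            self.old[self.cursor] = []
            self.cursor += 1

    def set(self, key, value):
        self._step()
        if self.old is None and self.count >= 2 * len(self.new):
            # Start a resize, but do NOT rehash everything now.
            self.old, self.cursor = self.new, 0
            self.new = [[] for _ in range(2 * len(self.old))]
        for table in filter(None, (self.new, self.old)):
            b = self._bucket(table, key)
            for i, (k, _) in enumerate(b):
                if k == key:       # drop any stale copy before inserting
                    del b[i]
                    self.count -= 1
                    break
        self._bucket(self.new, key).append((key, value))
        self.count += 1

    def get(self, key, default=None):
        self._step()
        for table in filter(None, (self.new, self.old)):
            for k, v in self._bucket(table, key):
                if k == key:
                    return v
        return default
```

The payoff is that no single `set` or `get` ever pays for a full rehash, which is exactly the kind of worst-case spike that shows up at P999.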
NVMe storage systems and NVMe networks promise to reduce latency further and increase performance beyond what SAS-based flash systems and current networking technology can deliver. To take advantage of that performance gain, however, the data center must have workloads that can take advantage of all the latency reduction and performance improvements that NVMe offers. Vendors emphatically state that NVMe is the next must-have technology, yet many still continue to provide SAS-based arrays using traditional networks.
How do IT planners know then, that investing in NVMe will truly provide their organizations the benefits of NVMe for their demanding applications and see a measurable return on investment? Just creating a test environment to perform an NVMe evaluation can break the IT budget!
Register now to join Storage Switzerland, Virtual Instruments, and SANBlaze as we look at the state of the data center and provide IT planners with the information they need to decide if NVMe is an investment they should make now or if they should wait a year or more. The key is determining which applications can benefit from NVMe-based approaches.
In this event, IT professionals will learn:
- About NVMe, NVMe Storage Systems and NVMe over Fabric Networking
- The Performance Potential of NVMe Storage and Networks
- What attributes are needed for a workload to take advantage of NVMe
- Why NVMe creates problems for current IT testing strategies
- Why a Workload Simulation approach is the only practical way to test NVMe
- How to build a storage performance validation practice
Adding Value in the Cloud with Performance TestRodolfo Kohn
System quality attributes such as performance, scalability, and availability are among the main concerns for cloud application developers and product managers. There are many examples of notable system failures that show how a company's business can be affected during key events like Cyber Monday. However, many difficulties come up when a team intends to consciously manage these types of quality attributes during development and operations. These difficulties can be grouped into two main aspects: human and technical. During this presentation, I will share the main technical difficulties we have dealt with over the last seven years working with different cloud services, as well as key technical performance, scalability, and availability issues we were able to find and solve. These cases are relevant across different products, technologies, and teams.
Learn how to improve the performance of your Cognos environment. We cover hardware and server specifics, architecture setup, dispatcher tuning, report specific tuning including the Interactive Performance Assistant and more. See the recording and download this deck: https://senturus.com/resources/cognos-analytics-performance-tuning/
Senturus offers a full spectrum of services for business analytics. Our Knowledge Center has hundreds of free live and recorded webinars, blog posts, demos and unbiased product reviews available on our website at: https://senturus.com/resources/
Idera live 2021: Managing Databases in the Cloud - the First Step, a Succes...IDERA Software
You need to start moving some on-premises databases to the cloud.
- Where do you begin?
- What are your options?
- What will your job look like afterward?
- What tools can you use to manage databases in the cloud?
- How does troubleshooting database performance problems in the cloud differ from on-premises?
- How can you help manage monthly cloud costs so the effort actually is cost effective?
Moving to the cloud is not as easy as one might think, so knowing the answers to these kinds of questions will put your feet on the path to success. See how DB PowerStudio can readily assist with these efforts and questions.
The presenter, Bert Scalzo, is an Oracle ACE, blogger, author, speaker and database technology consultant. He has worked with all major relational databases, including Oracle, SQL Server, Db2, Sybase, MySQL, and PostgreSQL. Bert’s work experience includes stints as product manager for multiple-database tools, such as DBArtisan and Aqua Data Studio at IDERA. He has three decades of Oracle database experience and previously worked for both Oracle Education and Oracle Consulting. Bert holds several Oracle Masters certifications and his academic credentials include a BS, MS, and PhD in computer science, as well as an MBA.
Upgrade to Dell EMC PowerEdge R6515 servers and gain better OLTP and VDI perf...Principled Technologies
Additionally, PowerEdge R6515 servers with 3rd Gen AMD EPYC processors could lower licensing costs and empower your business to explore Kubernetes with VMware Tanzu.
Recover 30% of your day with IBM Development Tools (Smarter Mainframe Develop...Susan Yoskin
If you need to attract new developers, and want to keep your company’s name out of the headlines, then this session is for you. When your business depends on your mainframe apps working and performing well—all the time—you need to be alerted to issues as they occur and have the tools to help you find and fix the problems and test your solutions before disaster strikes (we’ve all been in those late night and weekend drills). You also need to continue supporting these applications for years to come, and that will require new talent.
This session will introduce you to the development environments that college grads are already comfortable with, and help your applications become more resilient at the same time. We’ll walk you through the tools to help you accomplish all of this and demo some scenarios to show you how efficiently our tools can perform the tasks that slow you down.
Windows Server 2003 EOS: an opportunity to rethink your IT and put in pla...Microsoft Décideurs IT
Dell session: Everyone has their own reasons, their own means, their own migration. As the July 14, 2015 deadline fast approaches for Windows Server 2003 users, several scenarios are possible for moving smoothly to a new environment. Whether driven by compliance constraints, end of warranty, or security concerns, these migration projects must be approached carefully, because they hold many opportunities for your organization: • Consolidate your IT infrastructure with virtualization, • Begin or continue your transformation to the cloud, • Optimize and modernize your business applications. To make the right choices, Dell's teams, who helped more than 500 companies migrate from Windows XP last year, are ready to bring you advice and expertise for these new challenges, and to share early customer feedback with you.
Introduction to Database Benchmarking with Benchmark Factory
1. Avoiding the Risks of Database Change using Benchmark Factory
Bert Scalzo, PhD, MBA & Oracle ACE
Database Expert & Product Architect
Global Marketing
3. Quest Software is now a part of Dell
Purpose / Agenda
• Database Benchmarking Basics
• Benchmark Factory Overview
• Benchmarking Options
• Customizing Benchmarking Projects
• Running a Benchmark
• Examining Runtime Reports
• Error Messages and Log Files
• Benchmark Factory Demonstration
• Questions & Answers
4. Database Benchmarking is complicated, even with a benchmarking tool. There is no substitute for your knowledge and expertise.
5. Successful Database Benchmarking
Expert – must know database, O/S, hardware, software, and tuning, and have read the benchmark specs. Stress test the database.
6. Database Benchmarking – Most Common Mistakes
7. Wrong person doing the heavy lifting
8. Trying to make do without the right tools
9. Wrong number of virtual users – don’t cut corners
10. Industry Standard Database Benchmarks
• Industry standard database benchmarks have been around for over 20 years
• TPC.ORG
– TPC-C: Older OLTP benchmark. Basic “order entry” type application.
– TPC-H: Data warehousing queries. 22 queries – not a star schema design.
– TPC-E: Newer OLTP benchmark. Simulates the workload of brokerage firms.
– TPC-DS: New, evolving complete DW. Spec still evolving – not yet in BMF.
– TPC-VMS: New, evolving virtual DBs. Spec still evolving – not yet in BMF.
• Be sure to read the specs
– Understand the basic database design
– Implementation options are up to you
– Sizing options for data load and concurrency
– How to interpret results and/or gauge success
• The majority of failures are due to insufficient preparation, unrealistic expectations, and uncertain metrics for success
11. Examples of Info in the Spec: TPC-C
BMF prompts for database size – you must know how to answer. 100 concurrent users = scale factor of 10.
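The sizing rule of thumb above can be sketched as a small helper. This is only an illustration, assuming the TPC-C convention of at most ten terminals (users) per warehouse, with scale factor taken as the warehouse count; the function name is mine, not part of BMF.

```python
import math

# TPC-C allows up to 10 terminals (users) per warehouse
USERS_PER_WAREHOUSE = 10

def tpcc_scale_factor(concurrent_users: int) -> int:
    """Minimum TPC-C scale factor (warehouse count) for a target user load."""
    return math.ceil(concurrent_users / USERS_PER_WAREHOUSE)

print(tpcc_scale_factor(100))   # matches the slide: 100 users -> scale factor 10
print(tpcc_scale_factor(1000))  # 1,000 users -> scale factor 100
```

Rounding up matters: 101 users already requires an eleventh warehouse under this convention.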
12. Examples of Info in the Spec: TPC-H
BMF does not know what Oracle options you are licensed for (partitioning), nor your hardware setup such as spindles per LUN, RAID level, CPU count, RAM size, and so on.
BMF creates basic and correct tables and indexes – you must manually optimize them within spec.
13. Database Design – Reference Full Disclosure Reports for Similar Setups
14. Database Design – Reference Full Disclosure Reports for Similar Setups
15. Database Design – Appendix B Will Show DDL for Optimal Design
A very common technique is to load very large data sets into ETL or staging tables, and then to do a create table as select (CTAS) to populate the benchmark tables.
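The staging-then-CTAS pattern above works in any SQL engine; the sketch below uses Python's built-in sqlite3 purely for illustration (table and column names are hypothetical). A real TPC-style load would target Oracle or another production database and use its vendor-specific bulk loader for the staging step.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Step 1: bulk-load the raw data into a staging table (a real run would
# use a dedicated bulk loader such as SQL*Loader for this step)
cur.execute("CREATE TABLE staging_orders (order_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO staging_orders VALUES (?, ?)",
                [(1, 10.0), (2, 25.5), (3, 7.25)])

# Step 2: CTAS – populate the benchmark table from staging in one statement
cur.execute("CREATE TABLE orders AS SELECT order_id, amount FROM staging_orders")

print(cur.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 3
```

The appeal of CTAS is that the final table is written in a single set-based pass, which the database can often perform with minimal logging compared to row-by-row inserts.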
16. Common Database Benchmark Metrics – TPS and Avg. Response Time
• Transactions per Second
– Gets far too much attention
– Meaningless to most users
– Sort of like automobile RPMs (how fast the internal engine is working – not how fast the car is moving or how soon we’ll arrive)
– Misconception that TPS equates to IOPS (IO Operations per Second) – ignores database memory caching and logging
– BMF does not report IOPS (on the roadmap, possibly for 7.0)
• Average Response Time
– Gets far too little attention
– Meaningful to most users
– Sort of like MPH (or KPH) – how fast the car actually is moving, which infers how soon we’ll arrive or how much fuel we’ll use
– When examined in conjunction with TPS, a generally observable and clear pattern often emerges (next slide) …
• Be sure to read my blog: The Myth of Transactions per Second (TPS)
17. Common Database Benchmark Pattern – True Point of Saturation
[Chart: TPS and Avg Response Time plotted against 100–1,000 concurrent users; y-axis 0–7,000]
Look for the inflection point where TPS is still increasing (or just starting to decrease) and close to its maximum, while the average response time is below the customer-defined SLA. Notice the line characteristics between roughly 750 and 850 concurrent users – for the current configuration and optimization, that is where the benchmark results get interesting.
A common mistake is to simply attempt to maximize TPS – remember, TPS is not IOPS.
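One way to locate the saturation point the slide describes is to pick, from measured (users, TPS, avg response time) samples, the highest-TPS load that still meets the SLA. A minimal sketch follows; the data points are invented for illustration, not taken from a real run.

```python
def find_saturation_point(results, sla_seconds):
    """results: list of (concurrent_users, tps, avg_response_time) tuples,
    ordered by user count. Returns the sample with the highest TPS whose
    average response time stays within the SLA, or None if none qualifies."""
    candidates = [r for r in results if r[2] <= sla_seconds]
    if not candidates:
        return None
    return max(candidates, key=lambda r: r[1])

# Illustrative numbers only: TPS flattens near 800 users while response time climbs
runs = [(100, 900, 0.2), (400, 3200, 0.6), (700, 5400, 1.1),
        (800, 5600, 1.8), (900, 5500, 2.6), (1000, 5200, 3.4)]

print(find_saturation_point(runs, sla_seconds=2.0))  # (800, 5600, 1.8)
```

Note how maximizing TPS alone would ignore the SLA filter entirely – here, the 900-user run has nearly the same TPS but already violates a 2-second SLA.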
18. Benchmark Factory Architecture – Console (Control Center) and Agents
You must manually install BMF agent software on servers – otherwise BMF will simply spawn agents on your console machine (a major bottleneck).
Each 32-bit BMF agent can handle 500 concurrent user sessions. Each 64-bit BMF agent can handle 1,000 concurrent user sessions.
Each server running one or more agents requires its own separate BMF license!
Agent servers can be virtual machines.
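The per-agent session limits above suggest a simple capacity calculation; a sketch, assuming the slide's figures of 500 sessions per 32-bit agent and 1,000 per 64-bit agent (the helper itself is mine, not a BMF feature).

```python
import math

# Per-agent session limits quoted on the slide
SESSIONS_PER_AGENT = {"32-bit": 500, "64-bit": 1000}

def agents_needed(concurrent_users: int, agent_arch: str = "64-bit") -> int:
    """How many BMF agents a load requires. Remember: each server
    running one or more agents needs its own separate BMF license."""
    return math.ceil(concurrent_users / SESSIONS_PER_AGENT[agent_arch])

print(agents_needed(1000, "32-bit"))  # 2
print(agents_needed(1000, "64-bit"))  # 1
print(agents_needed(4500, "64-bit"))  # 5
```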
19. Hardware Sizing is Very Difficult – Requires Lots of Thought
Let’s assume we are doing a TPC-C benchmark at scale factor 100, so 1,000 concurrent users and roughly 8–10 BMF agents.
Note we already have a huge built-in bottleneck – there’s only one NIC for 1,000 concurrent database sessions!
Let’s assume we are using virtual machines for the entire BMF architecture on a seemingly large VMware ESX / vSphere server.
20. Benchmark Factory Architecture
BMF Offers Two Repository Options (one marked DEFAULT)
21. BMF Offers Many Types of Tests – Industry Standard Benchmarks
22. BMF Offers Many Types of Tests – Scalability & Workload Capture/Replay
23. BMF Offers Many Types of Tests – Custom Load Scenario
24. BMF Offers Many Types of Tests – Goal Testing to Meet a Customer SLA
BMF will test from 100 to 1,000 concurrent users in increments of 100 additional users, and will stop when the average response time is more than 2 seconds (e.g. a common customer-defined SLA).
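The goal-test behavior described above (ramp the user count, stop once the SLA is breached) can be sketched as a simple loop. This is only an illustration of the logic, not BMF's implementation; `measure_avg_response` is a hypothetical stand-in for an actual agent-driven measurement against the database.

```python
def goal_test(measure_avg_response, start=100, stop=1000, step=100, sla_seconds=2.0):
    """Ramp concurrent users in increments; stop at the first step whose
    average response time exceeds the SLA. Returns the last passing user
    count, or None if even the first step fails."""
    last_passing = None
    for users in range(start, stop + 1, step):
        if measure_avg_response(users) > sla_seconds:
            break
        last_passing = users
    return last_passing

# Stand-in measurement: response time grows linearly with load, hitting 2.0 s at 800 users
fake_measure = lambda users: users / 400

print(goal_test(fake_measure))  # 800
```

In a real run each iteration is a full timed workload at that user count, so the measurement function is by far the expensive part; the control logic itself is this simple.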
25. Example of Benchmark Factory Project – Default Creation
26. Example of Benchmark Factory Project – Custom (Highly Recommended)
27. Example of Benchmark Factory Project – Custom (Highly Recommended)
28. Example of Benchmark Factory Running
29. Benchmark Factory Runtime Reports – Results from Runs
30. Benchmark Factory Error Logging – Where to Look
31. Benchmark Factory Error Logging – Where to Look
32. On-Demand Webcast – Intro to Database Benchmarking and Product Demo
• Download the webcast now
33. Additional Resources
• www.toadworld.com
– Expert Blogs
– Articles
– How-To Videos
– User Forum
– Wiki
– Direct access to product experts and much more
34. Additional Resources
• Benchmark Factory Home Page
• Download a free 30-day trial edition of Benchmark Factory
• White Paper – Top 10 Benchmarking Misconceptions
• On-Demand Webcast – Introduction to Database Benchmarking