This presentation shows why and how to use DataGate connection pooling with your AVR Web applications. With effective connection pooling your apps will run much faster and your IBM i will have a much reduced workload.
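The core idea of connection pooling can be illustrated with a minimal sketch (plain Python with a stand-in connection factory, not the DataGate API): connections are opened once up front and reused, so each request skips the expensive connect step.

```python
import queue

class ConnectionPool:
    """A minimal connection pool: connections are created once and
    reused, so requests avoid the cost of opening new connections."""

    def __init__(self, create_conn, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(create_conn())

    def acquire(self):
        return self._pool.get()   # blocks if the pool is exhausted

    def release(self, conn):
        self._pool.put(conn)

# Usage with a stub factory; a real app would pass a function that
# opens a DataGate/database connection instead.
opened = []
pool = ConnectionPool(lambda: opened.append(1) or len(opened), size=2)

c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()   # served from the pool, no third connection opened
print(len(opened))    # only 2 connections were ever created
```

The same pattern is what a pooled web application relies on: under steady load, connection creation cost is paid once per pool slot rather than once per request.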
Do you need Ops in your new startup? If not now, then when? And...what is Ops?
Learn how to scale Ruby-based distributed software infrastructure in the cloud to serve 4,000 requests per second, handle 400 updates per second, and achieve 99.97% uptime – all while building the product at the speed of light.
Unimpressed? Now try doing all of the above without an Ops team, while growing your traffic 100x in 6 months and deploying 5-6 times a day!
It could be a dream, but luckily it's a reality that could be yours.
Oracle 12c Parallel Execution New Features – Randolf Geist
This document discusses new parallel execution features introduced in Oracle 12c. It begins with an introduction to key aspects of parallel execution, including the producer-consumer model and data distribution skew. The document then covers major new 12c features such as hybrid hash distribution, concurrent UNION ALL, and the 1 slave distribution method. It concludes with a question and answer section.
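The producer–consumer model mentioned above can be illustrated with a generic sketch (plain Python threads and a queue, not Oracle internals): one set of worker slaves scans and distributes rows, another set consumes and aggregates them.

```python
import threading, queue

rows = list(range(100))
work_q = queue.Queue()
results = []
lock = threading.Lock()
SENTINEL = None

def producer(chunk):
    # producer slaves scan their slice of the data and distribute rows
    for r in chunk:
        work_q.put(r)

def consumer():
    # consumer slaves aggregate whatever rows they receive
    total = 0
    while True:
        r = work_q.get()
        if r is SENTINEL:
            break
        total += r
    with lock:
        results.append(total)

producers = [threading.Thread(target=producer, args=(rows[i::2],)) for i in range(2)]
consumers = [threading.Thread(target=consumer) for _ in range(2)]
for t in producers + consumers:
    t.start()
for t in producers:
    t.join()
for _ in consumers:
    work_q.put(SENTINEL)   # one sentinel per consumer ends the stream
for t in consumers:
    t.join()

print(sum(results))   # 4950, the same answer a serial scan would produce
```

Distribution skew, in these terms, is what happens when one consumer's share of the queue is far larger than the others', leaving the rest of the slaves idle.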
This document discusses different solutions for efficiently identifying fields in Salesforce objects that have not been used for a long time. Solution 1 involves downloading all data via the API and processing it locally, which is inefficient. Solution 2 uses the API to query data in batches, but has high API usage and long duration. Solution 3 refines the query between batches to optimize records retrieved. Solution 4 executes the query as anonymous Apex on the server for faster processing of more records in one roundtrip, with optimized network usage and API calls. Code examples are provided to implement Solutions 3 and 4.
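The idea behind Solution 3 can be sketched as keyset-style query refinement (the `run_query` function here is a local stub standing in for a real Salesforce API call; the field names are illustrative): each round trip narrows the filter to records past the last Id already seen.

```python
# Sketch of Solution 3: instead of re-running the same query for every
# batch, refine the filter with the last Id seen so each round trip only
# retrieves records not yet processed.
DATA = [{"Id": f"{i:03d}", "Field__c": i} for i in range(10)]

def run_query(last_id, batch_size):
    # stand-in for a SOQL query like:
    #   SELECT Id, Field__c FROM Obj WHERE Id > :last_id ORDER BY Id LIMIT :n
    matching = [r for r in DATA if r["Id"] > last_id]
    return matching[:batch_size]

def fetch_all(batch_size=4):
    last_id, out = "", []
    while True:
        batch = run_query(last_id, batch_size)
        if not batch:
            return out
        out.extend(batch)
        last_id = batch[-1]["Id"]   # refine the next query's filter

records = fetch_all()
print(len(records))   # all 10 records retrieved across successive batches
```

Solution 4 moves this same loop server-side as anonymous Apex, collapsing the per-batch round trips into one.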
This document discusses database performance core principles and how moving business logic out of the database and into application servers can negatively impact performance. The key points are:
1. Oracle databases are process-based and processes need to get CPU time quickly, stay on the CPU, and experience few involuntary context switches.
2. Moving business logic out of the database and into many small calls results in high call counts, more voluntary sleeps as processes switch contexts, and potential oversubscription of processes leading to involuntary sleeps.
3. This violates database performance principles and leads to inconsistent response times, lower throughput, and inefficient use of computing resources across database and application servers due to increased overhead of switching processes.
Keeping
Scaling asp.net websites to millions of users – oazabir
This document discusses various techniques for optimizing ASP.NET applications to scale from thousands to millions of users. It covers topics such as preventing denial of service attacks, optimizing the ASP.NET process model and pipeline, reducing the size of ASP.NET cookies on static content, improving System.net settings, optimizing queries to ASP.NET membership providers, issues with LINQ to SQL, using transaction isolation levels to prevent deadlocks, and employing a content delivery network. The overall message is that ASP.NET requires various "hacks" at the code, database, and configuration levels to scale to support millions of hits.
Microsoft Azure Web Sites Performance Analysis Lessons Learned – Chris Woodill
This document summarizes the results of performance testing Azure Web Sites hosting plans. It finds that the Shared plan can scale well if usage quotas are not exceeded. Scaling out by adding multiple smaller instances, such as 3 Basic Medium instances, provides better performance and value than a single larger instance like Standard Large. Migrating the database to Azure SQL significantly improves page load times and ability to handle load compared to using a local SQL Compact database. Overall, scaling out instances and optimizing the application are more cost effective ways to improve performance than simply upgrading to larger VM sizes.
James D Bloom is a mobile web expert who focuses on high performance, reliability, wide device support, and keeping things simple. In his talk, he discusses why performance is important for mobile websites and provides strategies to improve network performance through reducing requests and bytes, increasing bandwidth efficiency, and reducing latency. He also discusses ways to improve software performance through more parallelism, faster page rendering, and faster page interaction.
From Obvious to Ingenius: Incrementally Scaling Web Apps on PostgreSQL – Konstantin Gredeskoul
In this exciting and informative talk, presented at PgConf Silicon Valley 2015, Konstantin cuts through the theory to deliver a clear set of practical solutions for scaling applications atop PostgreSQL, eventually supporting millions of active users, tens of thousands concurrently, and with the application stack that responds to requests with a 100ms average. He shares how his team solved one of the biggest challenges they faced: effectively storing and retrieving over 3B rows of "saves" (a Wanelo equivalent of Instagram's "like" or Pinterest's "pin"), all in PostgreSQL, with highly concurrent random access.
Over the last three years, the team at Wanelo optimized the hell out of their application and database stacks. Using PostgreSQL version 9 as their primary data store, Joyent Public Cloud as a hosting environment, the team re-architected their backend for rapid expansion several times over, as the unrelenting traffic kept climbing up. This ultimately resulted in a highly efficient, horizontally scalable, fault tolerant application infrastructure. Unimpressed? Now try getting there without the OPS or DBA teams, all while deploying seven times per day to production, with an application measuring 99.999% uptime over the last 6 months.
This document discusses techniques for improving the performance of mobile web applications. It addresses reducing the number of requests, reducing file sizes, and increasing parallelism. Specifically, it recommends bundling JavaScript and CSS files, inlining small resources, using adaptive images and JavaScript, minification, compression, and domain sharding. It also suggests techniques like parallelizing service calls and downloads, delaying unnecessary downloads, and eager loading of static assets. The overall goal is to reduce load times and improve the user experience on mobile networks.
World-class Data Engineering with Amazon Redshift – Lars Kamp
These are the slides used in the Redshift training by intermix.io. This class introduces you to strategies and best practices for designing a data platform using Amazon Redshift.
For a link to the video, please contact nikola@intermix.io.
This document discusses common mistakes made in Oracle Business Intelligence development. It is organized by categories including the three layers of the RPD, system/DevOps/security issues, multidimensional modeling failures, front-end usage mistakes, and analysis/dashboard errors. Specific examples provided include using incorrect data types, not creating dimensional hierarchies, manual security management instead of roles, treating cubes like relational sources, and using OBI as an Excel exporting or data entry tool. The document is intended to review worst practices to improve core OBI development skills.
Free and useful tools have proliferated since the launch of the CodePlex and SourceForge websites. Join Kevin Kline, long-time author of the SQL Server Magazine column "Tool Time", as he profiles the very best of the free tools covered in his monthly column - dozens of free tools and utilities! Some of the covered tools help to:
- Track database growth
- Implement logging in SSIS job steps
- Stress test your database applications
- Automate important preventative maintenance tasks
- Automate maintenance tasks for Analysis Services
- Help protect against SQL Injection attacks
- Graphically manage Extended Events
- Utilize PowerShell scripts to ease administration
And much more. These tools are all free and independently supported by SQL Server enthusiasts around the world.
Your Guide to Streaming - The Engineer's Perspective – Ilya Ganelin
It feels like every week there's a new open-source streaming platform out there. Yet, if you only look at the descriptions, performance metrics, or even the architecture, they all start to look exactly the same! In short, nothing really differentiates itself - whether it be Storm, Flink, Apex, GearPump, Samza, KafkaStreams, AkkaStreams, or any of the other myriad technologies. So if they all look the same, how do you really pick a streaming platform to solve the problem that YOU have? This talk is about how to really compare these platforms, and it turns out that they do have their key differences, they're just not the ones you usually think about. If you're building a well-engineered system that's meant to last, the right way to compare them is to look at how they handle durability and availability, how easy they are to install and use, and how they deal with failures.
LearnBop Blue Green AWS Deployments - October 2015 – Alec Lazarescu
This document discusses blue/green deployments for LearnBop, an online tutoring platform. It describes setting up two environments with separate load balancers to allow deploying new code without disrupting existing users. The key steps are: 1) attaching the production load balancer to the new environment, 2) detaching it from the old environment, 3) attaching the staging load balancer to the old environment, and 4) detaching it from the new environment. This allows rolling back quickly by reversing the process. CNAME record swaps are avoided to prevent users remaining on the old version indefinitely.
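The ordering of the four-step swap is what makes it safe, and it can be sketched with stand-in load balancer objects (a simulation of the sequence, not real AWS API calls): production is attached to the new environment before being detached from the old one, so there is never a moment with no environment serving production traffic.

```python
# Simulation of the blue/green swap ordering described above.
prod_lb, staging_lb = {"old"}, {"new"}   # environments each LB points at

def swap(prod, staging):
    prod.add("new")         # 1) attach production LB to the new environment
    prod.discard("old")     # 2) detach it from the old environment
    staging.add("old")      # 3) attach staging LB to the old environment
    staging.discard("new")  # 4) detach staging LB from the new environment

swap(prod_lb, staging_lb)
print(prod_lb, staging_lb)  # production serves "new"; "old" waits behind staging
```

Rolling back is the same four steps with "old" and "new" swapped, which is why the old environment is kept alive behind the staging load balancer rather than torn down.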
Deployment pipeline for Azure SQL Databases – Eduardo Piairo
The document discusses deployment pipelines for Azure SQL databases. It describes establishing pipelines that include source control, continuous integration, and continuous delivery to automate database deployments. Key aspects covered include using source control to manage database changes and migrations, validating changes through testing during integration, and deploying changes to target environments in a reliable and recoverable manner. The pipeline aims to improve speed and reliability of database releases while reducing human errors.
These slides cover a talk on using distributed computation for database queries. Moore's Law, Amdahl's Law and distribution techniques are highlighted, and a simple performance comparison is provided.
Shopzilla redesigned their architecture to improve performance and scalability. The new design simplified layers, utilized caching extensively, and applied best practices for front-end performance. This led to significant business benefits including a 7-12% increase in conversion rates, 8-120% increase in search engine sessions, and a 225% increase in development velocity. Performance testing was a key part of the new approach.
The key to a successful mobile site is high performance and reliability across a wide range of device capabilities and network latencies. However, the mobile web is a hostile environment with support for HTML5, JavaScript and CSS varying widely across browsers and devices. This talk will explain best practices to build high performance mobile sites that work across a wide range of devices and capabilities. The focus will be on lessons learnt at Betfair while rewriting the entire mobile web stack and how we used techniques to maximise performance and reliability. After discussing the problems faced in mobile the talk will explain how adaptive techniques can be used to provide progressive enhancement. This will be followed by an explanation of why and where performance bottlenecks occur and how these can be solved.
Vikram Oberoi presented lessons learned from using Hadoop in production at Meebo. He discussed how Meebo transitioned to using Hadoop for ETL and analytics due to the large volume of log data they process daily. He emphasized the importance of using a workflow manager like Azkaban to automate jobs and dependencies rather than scripts, and of using a backwards-compatible data serialization format like Protocol Buffers to avoid issues when data schemas change over time.
Building faster websites: web performance with WordPress – Johannes Siipola
Nobody likes a slow website. Faster sites lead to happier users, and happier users lead to more conversions and revenue. That’s why you should take performance into account in your WordPress project. Learn what practical techniques and WordPress plugins to use in order to optimize your site for speed.
Hugh Brien of AppDynamics shares his Top 10 application issues he sees on a daily basis.
The list covers:
- Application Performance Monitoring
- Database Monitoring
- Java, .NET, Node.js, PHP, and Python Monitoring
- I/O
- And much more
This document discusses using virtualization and containers to improve database deployments in development environments. It notes that traditional database deployments are slow, taking 85% of project time for creation and refreshes. Virtualization allows for more frequent releases by speeding up refresh times. The document discusses how virtualization engines can track database changes and provision new virtual databases in seconds from a source database. This allows developers and testers to self-service provision databases without involving DBAs. It also discusses how virtualization and containers can optimize database deployments in cloud environments by reducing storage usage and data transfers.
Lessons from Highly Scalable Architectures at Social Networking Sites – Patrick Senti
What are the techniques and technologies used by popular social networking sites such as Facebook, Twitter, Tumblr, Pinterest or Instagram? How do they architect their systems to scale to hundreds of millions of visits per day?
At Yahoo! over the past year we have helped migrate hundreds of our grids' users to YARN. Our YARN clusters have in aggregate run over 18 million jobs with more than 3 billion tasks consuming over 10 thousand years of compute time, with one single cluster running 90 thousand jobs a day. From this experience we would like to share what we have learned about running YARN well, how this is different from running a 1.0 based cluster, and what it takes to migrate your jobs to YARN from 1.0.
DAT316_Report from the field on Aurora PostgreSQL Performance – Amazon Web Services
Tatsuo Ishii from SRA OSS has done extensive testing to compare the Aurora PostgreSQL-compatible Edition with standard PostgreSQL. In this session, he will present his performance testing results, and his work on Pgpool-II with Aurora; Pgpool-II is an open source tool which provides load balancing, connection pooling, and connection management for PostgreSQL.
Report from the Field on the PostgreSQL-compatible Edition of Amazon Aurora... – Amazon Web Services
Tatsuo Ishii from SRA OSS has done extensive testing to compare the Aurora PostgreSQL-compatible Edition with standard PostgreSQL. In this session, he will present his performance testing results, and his work on Pgpool-II with Aurora; Pgpool-II is an open source tool which provides load balancing, connection pooling, and connection management for PostgreSQL.
Amazon Aurora is a relational database service that is compatible with MySQL and PostgreSQL databases. It is fully managed by AWS and provides faster performance than MySQL databases at lower costs. Aurora provides high availability across three availability zones and automatic failover. It is easy to migrate existing MySQL databases to Aurora using AWS database migration services. Aurora is optimized for the cloud and leverages other AWS services like DynamoDB and S3 for storage. It has a simple pricing model based on the instance size and storage used.
As organizations invest in DevOps to release more frequently, there’s a need to treat the database tier as an integral part of your automated delivery pipeline – to build, test and deploy database changes just like any other part of your application.
However, databases (particularly RDBMS) are different from source code, and pose unique challenges to Continuous Delivery - especially in the context of deployments. Often, code changes require updating or migrating the database before the application can be deployed. A deployment method that works for installing a small database or a green-field application may not be suitable for industrial-scale databases. Updating the database can be more demanding than updating the app layer: database changes are more difficult to test, and rollbacks are harder. Furthermore, for organizations who strive to minimize service interruption to end users, zero-downtime database updates are a laborious operation.
Your DB stores the most mission-critical and sensitive data of your organization (transaction data, business data, user information, etc.). As you update your database, you’d want to ensure data integrity, ACID, data retention, and have a solid rollback strategy - in case things go wrong …
This talk covers strategies for database deployments and rollbacks:
• What are some patterns and best practices for reliably deploying databases as part of your CD pipeline?
• How do you safely rollback database code?
• How do you ensure data integrity?
• What are some best practices for handling advanced scenarios and backend processes, such as scheduled tasks, ETL routines, replication architecture, linked databases across distributed infrastructure, and more.
• How to handle legacy database, alongside more modern data management solutions?
Workshop on Advanced Design Patterns for Amazon DynamoDB - DAT405 - re:Invent...Amazon Web Services
Join us for the first-ever Amazon DynamoDB practical hands-on workshop. This session is designed for developers, engineers, and database administrators who are involved in designing and maintaining DynamoDB applications. We begin with a walkthrough of proven NoSQL design patterns for at-scale applications. Next, we use step-by-step instructions to apply lessons learned to design DynamoDB tables and indexes that are optimized for performance and cost. Expect to leave this session with the knowledge to build and monitor DynamoDB applications that can grow to any size and scale. Attendees should have a basic understanding of DynamoDB. To attend this workshop, bring your laptop.
Scylla Summit 2022: Scylla 5.0 New Features, Part 1ScyllaDB
Discover the new features and capabilities of Scylla Open Source 5.0 directly from the engineers who developed it. This second block of lightning talks will cover the following topics:
- New IO Scheduler and Disk Parallelism
- Per-Service-Level Timeouts
- Better Workload Estimation for Backpressure and Out-of-Memory Conditions
- Large Partition Handling Improvements
- Optimizing Reverse Queries
To watch all of the recordings hosted during Scylla Summit 2022 visit our website here: https://www.scylladb.com/summit.
AWS Summit 2014 Melbourne - Breakout 5
Cloud computing gives you a number of advantages, such as being able to scale your application on demand. As a new business looking to use the cloud, you inevitably ask yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We will show you how to best combine different AWS services, make smarter decisions for architecting your application, and best practices for scaling your infrastructure in the cloud.
Presenter: Craig Dickson, Solutions Architect, Amazon Web Services
Hangfire
An easy way to perform background processing in .NET and .NET Core applications. No Windows Service or separate process required.
Why Background Processing?
Lengthy operations like updating lot of records in DB
Checking every 2 hours for new data or files
Invoice generation at the end of every billing period
Monthly Reporting
Rebuild data, indexes or search-optimized index after data change
Automatic subscription renewal
Regular Mailings
Send an email due to an action
Background service provisioning
20141206 4 q14_dataconference_i_am_your_dbhyeongchae lee
The document discusses scaling databases and provides an overview of different database scaling techniques. It begins with introductions to the presenter and databases that scale before covering techniques like read caching, write coalescing, connection scaling, master-slave replication, vertical and horizontal partitioning. Specific databases that scale like Amazon Aurora are also mentioned. Real-world examples of scaling stories and the presenter's experience scaling MySQL are provided.
The document discusses the rise of NoSQL databases. It notes that NoSQL databases are designed to run on clusters of commodity hardware, making them better suited than relational databases for large-scale data and web-scale applications. The document also discusses some of the limitations of relational databases, including the impedance mismatch between relational and in-memory data structures and their inability to easily scale across clusters. This has led many large websites and organizations handling big data to adopt NoSQL databases that are more performant and scalable.
The document discusses scaling a web application called Wanelo that is built on PostgreSQL. It describes 12 steps for incrementally scaling the application as traffic increases. The first steps involve adding more caching, optimizing SQL queries, and upgrading hardware. Further steps include replicating reads to additional PostgreSQL servers, using alternative data stores like Redis where appropriate, moving write-heavy tables out of PostgreSQL, and tuning PostgreSQL and the underlying filesystem. The goal is to scale the application while maintaining PostgreSQL as the primary database.
This document discusses the rise of NoSQL databases as an alternative to traditional relational databases. It covers the constraints and scaling issues that led to NoSQL, examples of NoSQL categories (key-value, document, column, and graph databases), and how NoSQL systems sacrifice ACID compliance in favor of BASE properties like eventual consistency in order to improve scalability and performance. The document also discusses the CAP theorem and how NoSQL databases allow for partition tolerance over consistency or availability.
IBM Connections – Managing Growth and ExpansionLetsConnect
You are lucky, your Connections platform is experiencing rapid growth – now what? How to you determine when you have grown to where you need to build out the service? How do you grow WebSphere or the File Service Space? How do you add additional Web Servers or is it better to add a proxy server? Learn how to judge and decide what you need to change – and how to then implement it.
Case Study: Sprinklr Uses Amazon EBS to Maximize Its NoSQL Deployment - DAT33...Amazon Web Services
Sprinklr delivers a complete social media management system for the enterprise. It also helps the world’s largest brands do marketing, advertising, care, sales, research, and commerce on Facebook, Twitter, LinkedIn, and 21 other channels on a global level. This is all done on a single integrated platform. In this session, you learn about Sprinklr’s journey to the cloud and discover how to optimize your NoSQL database on AWS for cost, efficiency, and scale. We also do dive deep into best practices and architectural considerations for designing and managing NoSQL databases, such as Apache Cassandra, MongoDB, Apache CouchDB, and Aerospike on Amazon EC2 and Amazon EBS. We share best practices for instance and volume selection, provide performance tuning hints, and describe cost optimization techniques throughout.
The document discusses availability and reliability in distributed systems. It describes that for a system to be truly reliable, it must be fault-tolerant, highly available, recoverable, consistent, scalable, have predictable performance, and be secure. It then discusses how the namenode is a single point of failure in Hadoop, and describes various approaches to improve availability through replicating metadata and using secondary or backup nodes.
Geek Sync | Planning a SQL Server to Azure Migration in 2021 - Brent OzarIDERA Software
The document discusses planning a SQL Server migration to Azure. It outlines four key steps: 1) Choosing an Azure target service; 2) Working around unavailable services; 3) Provisioning appropriate hardware resources; and 4) Tuning performance once in Azure. Common challenges include agent jobs, cross-database transactions, and adjusting to Azure's standardized hardware configurations and throughput limits. The document recommends starting with a "lift and shift" migration to VMs for initial simplicity.
Stop validating user input like a rookieRoger Pence
The document discusses server-side validation of user input using data annotations and the .NET Validator class. It describes how the Validator class can validate all properties of an object if they are decorated with the appropriate data annotation attributes. The TryValidateObject method of the Validator class is used to validate an object by passing in the object, a ValidationContext, a collection to hold errors, and a boolean to control validation scope.
The document discusses new features in ASNA Visual RPG and DataGate version 12.0, including:
- Support for Visual Studio 2013 and formal end of support for versions older than 11.2.
- Options for installing multiple versions of Visual Studio and AVR to allow transitioning applications to newer runtimes.
- New language features in AVR 12.0 like conditional expressions, automatic properties, and BegUsing/EndUsing blocks to safely use disposable objects.
- Recommendations for upgrading to AVR 12.0 like removing older versions first and installing Visual Studio 2013 before AVR 12.0.
All you know about ASP.NET deployment is wrong!Roger Pence
The document discusses improving the ASP.NET deployment process by creating an automated build process using MSBuild. Currently, many developers manually click "Build->Publish Site" and hope for the best. The proposed process involves: 1) Minimizing and concatenating JavaScript and CSS files to improve page load times; 2) Creating Web deployment profiles and transforms to ensure the correct configuration for production; 3) Using MSBuild to automate the overall build and deployment process by defining tasks like compilation, file copying/deleting, and more. This proposed process aims to make deployment automated, inclusive of necessary tasks, and provide good logs for troubleshooting.
The document discusses debugging JavaScript and CSS in browser developer tools. It explains that Chrome, Firefox, and Internet Explorer all provide built-in developer tools that can be accessed with keyboard shortcuts or from the browser menu. It recommends using Firefox and Chrome's additional plugins like Firebug and Web Developer to augment the debugging experience. Internet Explorer is considered the weakest of the three browsers for development.
Using formal testing to make better AVR appsRoger Pence
The document discusses unit testing for .NET applications developed with Visual Studio and AVR. It defines what unit testing is and best practices for writing good unit tests. It recommends starting a new AVR class library project and adding references and using statements to enable unit testing. An example test class and method are provided to demonstrate how to mark classes and methods for unit testing.
This document discusses strategic guidance for legacy AVR Classic applications. It notes that many customers have developed substantial Windows applications with AVR Classic that are widely used but aging. The document outlines ASNA's 5 R strategy for future-proofing these applications: Relax, Refine, Restrain, Reinvest, and Reimagine. It provides details on each step, such as getting applications to a code complete state, minimizing dependencies, understanding COM and .NET interoperability, restraining changes to old code, reinvesting in skills, and more. The overall strategy is to protect legacy applications while preparing teams for future development.
The document discusses solving a problem with determining row and button clicks in a GridView control in .NET. It describes using a decorator class pattern to extend the GridView and add functionality to determine both the row and button clicked by setting the command argument and handling the row command event. This allows having multiple button columns while still knowing the row and button clicked without sub-classing the GridView.
Top Benefits of Using Salesforce Healthcare CRM for Patient Management.pdfVALiNTRY360
Salesforce Healthcare CRM, implemented by VALiNTRY360, revolutionizes patient management by enhancing patient engagement, streamlining administrative processes, and improving care coordination. Its advanced analytics, robust security, and seamless integration with telehealth services ensure that healthcare providers can deliver personalized, efficient, and secure patient care. By automating routine tasks and providing actionable insights, Salesforce Healthcare CRM enables healthcare providers to focus on delivering high-quality care, leading to better patient outcomes and higher satisfaction. VALiNTRY360's expertise ensures a tailored solution that meets the unique needs of any healthcare practice, from small clinics to large hospital systems.
For more info visit us https://valintry360.com/solutions/health-life-sciences
Consistent toolbox talks are critical for maintaining workplace safety, as they provide regular opportunities to address specific hazards and reinforce safe practices.
These brief, focused sessions ensure that safety is a continual conversation rather than a one-time event, which helps keep safety protocols fresh in employees' minds. Studies have shown that shorter, more frequent training sessions are more effective for retention and behavior change compared to longer, infrequent sessions.
Engaging workers regularly, toolbox talks promote a culture of safety, empower employees to voice concerns, and ultimately reduce the likelihood of accidents and injuries on site.
The traditional method of conducting safety talks with paper documents and lengthy meetings is not only time-consuming but also less effective. Manual tracking of attendance and compliance is prone to errors and inconsistencies, leading to gaps in safety communication and potential non-compliance with OSHA regulations. Switching to a digital solution like Safelyio offers significant advantages.
Safelyio automates the delivery and documentation of safety talks, ensuring consistency and accessibility. The microlearning approach breaks down complex safety protocols into manageable, bite-sized pieces, making it easier for employees to absorb and retain information.
This method minimizes disruptions to work schedules, eliminates the hassle of paperwork, and ensures that all safety communications are tracked and recorded accurately. Ultimately, using a digital platform like Safelyio enhances engagement, compliance, and overall safety performance on site. https://safelyio.com/
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
Why Apache Kafka Clusters Are Like Galaxies (And Other Cosmic Kafka Quandarie...Paul Brebner
Closing talk for the Performance Engineering track at Community Over Code EU (Bratislava, Slovakia, June 5 2024) https://eu.communityovercode.org/sessions/2024/why-apache-kafka-clusters-are-like-galaxies-and-other-cosmic-kafka-quandaries-explored/ Instaclustr (now part of NetApp) manages 100s of Apache Kafka clusters of many different sizes, for a variety of use cases and customers. For the last 7 years I’ve been focused outwardly on exploring Kafka application development challenges, but recently I decided to look inward and see what I could discover about the performance, scalability and resource characteristics of the Kafka clusters themselves. Using a suite of Performance Engineering techniques, I will reveal some surprising discoveries about cosmic Kafka mysteries in our data centres, related to: cluster sizes and distribution (using Zipf’s Law), horizontal vs. vertical scalability, and predicting Kafka performance using metrics, modelling and regression techniques. These insights are relevant to Kafka developers and operators.
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Unlock the Secrets to Effortless Video Creation with Invideo: Your Ultimate G...The Third Creative Media
"Navigating Invideo: A Comprehensive Guide" is an essential resource for anyone looking to master Invideo, an AI-powered video creation tool. This guide provides step-by-step instructions, helpful tips, and comparisons with other AI video creators. Whether you're a beginner or an experienced video editor, you'll find valuable insights to enhance your video projects and bring your creative ideas to life.
DECODING JAVA THREAD DUMPS: MASTER THE ART OF ANALYSISTier1 app
Are you ready to unlock the secrets hidden within Java thread dumps? Join us for a hands-on session where we'll delve into effective troubleshooting patterns to swiftly identify the root causes of production problems. Discover the right tools, techniques, and best practices while exploring *real-world case studies of major outages* in Fortune 500 enterprises. Engage in interactive lab exercises where you'll have the opportunity to troubleshoot thread dumps and uncover performance issues firsthand. Join us and become a master of Java thread dump analysis!
Unveiling the Advantages of Agile Software Development.pdfbrainerhub1
Learn about Agile Software Development's advantages. Simplify your workflow to spur quicker innovation. Jump right in! We have also discussed the advantages.
E-commerce Development Services- Hornet DynamicsHornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
The Rising Future of CPaaS in the Middle East 2024Yara Milbes
Explore "The Rising Future of CPaaS in the Middle East in 2024" with this comprehensive PPT presentation. Discover how Communication Platforms as a Service (CPaaS) is transforming communication across various sectors in the Middle East.
A Comprehensive Guide on Implementing Real-World Mobile Testing Strategies fo...kalichargn70th171
In today's fiercely competitive mobile app market, the role of the QA team is pivotal for continuous improvement and sustained success. Effective testing strategies are essential to navigate the challenges confidently and precisely. Ensuring the perfection of mobile apps before they reach end-users requires thoughtful decisions in the testing plan.
Microservice Teams - How the cloud changes the way we workSven Peters
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
Most important New features of Oracle 23c for DBAs and Developers. You can get more idea from my youtube channel video from https://youtu.be/XvL5WtaC20A
Oracle 23c New Features For DBAs and Developers.pptx
Using connection pooling for better AVR Web performance
1. ASNApalooza 2007
by Roger Pence
ASNA Education Director
Maximum velocity: How to get the most performance from your AVR apps
Paying attention to just a few details
can make your programs faster and
more efficient
4. What is connection pooling?
• Connection pooling is DataGate’s ability to
“reuse” database server jobs between
stateless IO requests
• Connection pooling applies mostly to stateless
Web applications or Web services.
• Rarely is connection pooling helpful to
Windows-based programs
– It’s not likely to hurt anything, but connection
pooling is especially suited for stateless apps
5. Connection pooling: the System i
• Effective connection pooling is especially
important if the System i is your database
platform
• Without connection pooling, a new OS/400 job
must be created for each request. This can take
3-10 seconds—depending on System i load and
configuration
• With connection pooling, an OS/400 job is made
available to a request in a matter of milliseconds
6. Connection pooling: SQL Server
• Although instancing a database connection isn’t
quite the performance drain on SQL Server that it
is on the System i, it’s still a drain
• It may also positively affect SQL Server licensing
– Because you are in effect also pooling licensing (for
CAL-based licenses)
– Connection pooling keeps any one connection in play
for the minimum amount of time
• This discussion is primarily System i-centric, but
everything said here about connection pooling
also applies to DataGate for SQL Server
7. Enabling connection pooling
• Connection pooling can be enabled statically
through database name attributes
• Or set dynamically by changing the
database object’s PoolingTimeout property
prior to connecting that object
9. Enabling connection pooling dynamically
• Connection pooling can also be controlled by
setting the database object’s PoolingTimeout
value.
• Any value greater than zero sets the connection
pooling timeout value (in minutes); zero disables
connection pooling
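As an illustration of that rule, here is a minimal Python sketch. It is not the DataGate API: the PoolingTimeout property name comes from the slides, but the class below is a hypothetical stand-in.

```python
class Database:
    """Hypothetical stand-in for a DataGate-style database object."""

    def __init__(self):
        # Mirrors the slide's rule: 0 disables pooling,
        # any value > 0 is a pooling timeout in minutes.
        self.pooling_timeout = 0

    @property
    def pooling_enabled(self):
        return self.pooling_timeout > 0


db = Database()
db.pooling_timeout = 20       # set BEFORE connecting the object
assert db.pooling_enabled

db.pooling_timeout = 0        # zero disables connection pooling
assert not db.pooling_enabled
```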
10. What’s a good time-out value?
• That depends.
• For many shops, something in the range of 20
minutes or so works well
• However, we have customers that use much
larger values if the latency between pages is
extreme
– Shop floor browser-based applications for
example
11. How does connection pooling work?
• Red zone= inactive pooled jobs
• Green zone = active jobs
• At this point, application has no jobs pooled
12. Blue and pink: cool and warm
• For purposes of this discussion:
– Blue users get new, previously unpooled jobs.
These users wait 3-7 seconds for their page.
– Blue jobs are new, previously unpooled jobs.
– Pink users get a pooled job. These users wait just
a few milliseconds for their page.
– Pink jobs are previously pooled jobs.
Think of it this way: blue users/jobs are cool;
pink users/jobs are warm!
13. A user request occurs
• DataGate first looks in the red job zone for a
previously pooled job. There isn’t one. DataGate
starts a new one.
14. The job is now pooled
• When the server is done servicing the user’s request,
the job is “moved” to the pooled zone
• Pooled jobs have no open files, no activity, and
consume very few OS/400 resources
16. The job is put back in the pool
• When the server is done servicing the user’s request,
the job is “moved” to the pooled zone again
17. Two users request a page within 1 ms of each other
• The first user gets the previously pooled job
• The second user has to wait for OS/400 to create a
new job
19. Three users request a page within a couple of ms of each other
• Two users get a pooled job; the third user waits for a new job
• For most Web apps, jobs aren’t unique to a user, so the two
pooled jobs are available to the first two users, independent
of who previously used the two pooled jobs
20. Now three jobs are pooled
• The inactive job pool is now somewhat populated
• Lots of users can come and go through these
three jobs
21. The benefit of a populated pooled zone
• As the pooled zone gets populated, it becomes
less likely that new jobs will be needed
• After five or six jobs are pooled, what are the
chances that five or six users will each request
a page within a few hundred or so
milliseconds of each other?
• Of course, it could happen. And if it does, a
new job is added and the pooled zone grows
by one
22. 20 or 30 to one!
• ASNA benchmarks show that you can expect
at least 20 or 30 users able to do “normal”
work using a single pooled job
• As the load grows, so grows the pooled zone
• But pooled jobs have a very low impact on the
System i
Lots of users busy with few jobs
23. How does the pooled zone get cleared out?
• By job timeout value
– First in, first out
• For example, with no activity, a job drops out of the
pooled zone when its timeout value expires
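Putting the preceding slides together, the pooled-zone behavior (reuse a warm job when one exists, create a new one when not, and evict first-in-first-out on timeout) can be modeled with a toy simulation. This is illustrative Python with hypothetical names, sketching the described behavior rather than DataGate’s implementation.

```python
from collections import deque

class JobPool:
    """Toy model of the pooled ('red') zone described in the slides."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.idle = deque()                # (job, time_released), oldest first

    def acquire(self, now):
        self._evict_expired(now)
        if self.idle:
            job, _ = self.idle.popleft()   # warm: handed back in milliseconds
            return job, "pooled"
        return f"job-{now}", "new"         # cool: OS/400 must create a job (seconds)

    def release(self, job, now):
        # On disconnect, the job moves back into the pooled zone
        self.idle.append((job, now))

    def _evict_expired(self, now):
        # First in, first out: jobs drop out when their timeout expires
        while self.idle and now - self.idle[0][1] > self.timeout:
            self.idle.popleft()


pool = JobPool(timeout_seconds=60)
job, kind = pool.acquire(now=0)        # no pooled jobs yet -> "new"
pool.release(job, now=1)
job, kind2 = pool.acquire(now=2)       # reuses the pooled job -> "pooled"
pool.release(job, now=3)
job, kind3 = pool.acquire(now=120)     # idle 117 s > 60 s timeout -> "new"
print(kind, kind2, kind3)              # new pooled new
```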
24. How to tell if it’s working
• Performance is a very good way to tell if
connection pooling is working
• However, another way to ensure it’s working
is to test your app, through several pages,
knowing only one user is using it
• If, after using the app for a while you have
more than one OS/400 job active, something
is amiss
25. Something is wrong here!
There are three jobs for an
application with a single user!
26. Something else to watch for
• It’s possible to be using connection pooling
correctly and still get more than one job active
• This can happen when a Web page uses one
or more secondary classes, each of which
establishes its own DB connection
• This topic is out of the scope of this
presentation, but if you think this is happening
to you, ask me offline for a copy of the
Singleton DB pattern explanation
27. The rules of connection pooling
• Enable it either statically or dynamically
• The next point is very important!
• Each page must follow this cycle:
– Connect the DB
– Open files
– Do work
– Close files
– Disconnect the DB
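That connect/open/work/close/disconnect cycle can be sketched as a wrapper that always cleans up, even when the “do work” step fails. This is illustrative Python with hypothetical names, not AVR or the DataGate API:

```python
from contextlib import contextmanager

class PageDB:
    """Hypothetical stand-in for a page's database object."""
    def __init__(self):
        self.log = []
    def connect(self):    self.log.append("connect")
    def open_files(self): self.log.append("open files")
    def close_all(self):  self.log.append("close *All")
    def disconnect(self): self.log.append("disconnect")

@contextmanager
def per_page(db):
    db.connect()
    db.open_files()
    try:
        yield db                # "do work" happens here
    finally:
        db.close_all()          # close files first...
        db.disconnect()         # ...then disconnect, returning the job to the pool


db = PageDB()
with per_page(db):
    db.log.append("do work")
print(db.log)
# ['connect', 'open files', 'do work', 'close *All', 'disconnect']
```

The point of the `finally` block is that the close and disconnect run on every page, including error pages, so the job always returns to the pool.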
28. Your app must disconnect for every page!
• A job gets moved back to the pooled zone
when it’s disconnected
• If you don’t disconnect, you aren’t pooling
jobs!
• The Page’s Unload event is a good place to put
your closes and disconnect
30. Don’t forget to close files!
• If you disconnect but don’t close files, you
leave lots of files open!
• Close *All
• then disconnect
31. Remember this!
• If you don’t close files and disconnect the DB
object before the page goes out of scope, you
are not using connection pooling!
• The same advice applies to Web methods.
Close files and disconnect the DB after every
Web method call
32. Your app must disconnect for every page!
• A job gets moved back to the pooled zone
when it’s disconnected
• If you don’t disconnect, it’s an orphan active
job
33. Pooled connection scope
• Pooled connections are uniquely identified by
all of the properties of a database name.
• Thus, if each user is signing on to your Web
app with her user profile and password, you
are negating the scalable benefits of
connection pooling
– You are, however, letting users get their jobs back
quickly
34. Pooled connection scope
• Some shops can’t use a single database name
for all users (i.e., a single user ID and
password). Thanks, SOX.
• Consider using a few database names to
group users (perhaps by security or function)
• This way you’re at least getting some
scalability by letting each group share pooled
jobs
35. Limiting database platform jobs
• When you instance a class that has a DB
object, and that class connects, that class gets
its own job on the database platform
• This means that it’s very easy for any one
program (Windows or Web) to have several
jobs in play for that one program
• This… is not good!
37. An example with two classes
• Two classes, two DBs, two connects = two jobs
This is especially
troubling on the
System i.
Scalability goes
out the window
if you aren’t
careful!
38. The solution: the singleton DB pattern
• A parent class
passes its job
to its children,
the children to
their children
• Results in just
one job for the
instance of the
program
40. To limit the database platform to one job, the parent
class needs to pass its DB object around to its
children (and they to their children, if they have any).
41. There are just a few rules
• The parent, top-level class must pass its DB
object on
• The parent must disconnect the DB
• Children can never disconnect
– Others might want the job
• Children can connect the DB, but should
first check to see if it needs to be connected
– Others might have previously connected it
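Those rules can be sketched in a few lines. This is illustrative Python with hypothetical names (the real pattern is documented in the Palooza downloads): the parent owns and disconnects the DB; children receive it, connect it only if necessary, and never disconnect it.

```python
class DB:
    """Hypothetical shared database object: one connected DB = one server job."""
    def __init__(self):
        self.connected = False
    def connect(self):
        self.connected = True
    def disconnect(self):
        self.connected = False

class Child:
    def __init__(self, db):
        self.db = db                  # receives the parent's DB, never makes its own
    def do_work(self):
        if not self.db.connected:     # connect only if nobody has yet
            self.db.connect()
        # ... file IO on the shared job ...
        # A child NEVER disconnects: others might still want the job

class Parent:
    def __init__(self):
        self.db = DB()                # the single DB for the whole program
    def run(self):
        self.db.connect()
        Child(self.db).do_work()      # same DB handed down: still one job
        Child(self.db).do_work()
        self.db.disconnect()          # only the parent disconnects


p = Parent()
p.run()
print(p.db.connected)  # False
```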
42. The singleton DB pattern is well-documented
• The downloads for Palooza (associated with
this session) include a detailed description of
the singleton DB pattern
• Factor it into all of your programs!
• If you don’t, you are risking scalability on your
database server
43. Beware shared DB connections!
• Do not share DB connections in Web
applications
• This forces all users to wait on a single thread
for DB connections
• This is guaranteed to make you stay up late at
night
44. Consider (carefully) if using threading can help your application performance
• Spinning off certain processes can help
application performance
• One place where using threading is effective is
to cause Web services to fire off a thread (to
start a long-running process) and quickly
return
• Here is an abbreviated example…
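The example slide itself isn’t preserved in this transcript, so here is a generic Python sketch of the idea (all names are hypothetical): the Web method starts the long-running work on a background thread and returns to the caller right away.

```python
import threading
import time

results = []

def long_running_process():
    time.sleep(0.1)              # stands in for the lengthy work
    results.append("done")

def web_method():
    """Fire off the work on a thread and return immediately."""
    worker = threading.Thread(target=long_running_process, daemon=True)
    worker.start()
    return "accepted", worker    # the caller gets a fast response


status, worker = web_method()
print(status)        # "accepted" -- returned without waiting for the work
worker.join()        # only for this demo; a real Web method wouldn't join
print(results[0])    # "done"
```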
46. In summary
• Use connection pooling. Properly!
• Dispatch the KeyList where it makes sense
• Use SetRange/ReadRange instead of
SETLL/READE
• Populate lists from read-only files
• Don’t persist record locks longer than
necessary
46
Editor's Notes
You could argue that connection pooling isn’t directly related to file IO. We in ASNA’s tech support know better. We deal with issues nearly weekly that are reported as poor file IO and the culprit is almost always found to be improper use of connection pooling. Without effective connection pooling, your Web applications will wait several unnecessary seconds for their pages to appear.
In most cases, less than 5 milliseconds.
CAL = Client Access License, under which SQL Server use is licensed per client.
Each time this database name is connected, it has connection pooling enabled and a 20-minute connection pooling timeout value.
For a shop floor application, the operator returns to the screen infrequently during the day. If you want to pool jobs for the best performance for this type of user, you probably want something longer than 20 minutes.
Let’s consider a typical Web application. It’s early in the morning, and there haven’t been any users on the Web app for quite a while. Nothing is currently in memory for the Web app, and it has no pooled System i jobs available. The jobs in the green “eggcrate” are active pooled jobs; the jobs in the red “eggcrate” are inactive pooled jobs.
Let’s see how it works.
This user waits 4-7 seconds for her first page.
This user waits 4-7 seconds for her first page.
There is just barely a wait for this page for this pink (warm) user.
This user waits 4-7 seconds for her first page.
Given typical database connections for Web apps, the concept of the “first” user is a little interesting. In this case, it means the first of the two to request a job. In other words, the pooled job goes to whoever asks for it first, not to whoever used it last.
The pink user gets her page very quickly, the
While you get a job back in a matter of milliseconds, there is still other work to be done for any given page. Most pages will complete in a matter of a few hundred milliseconds, at the most.
Each job expires after x minutes of inactivity from when it was last active.
Something is wrong here!
If you forget to close files or disconnect, the opened files are not closed for you. And those open file handles aren’t reused across job instances. Close files, too!
It’s even worse than that! Not only are you creating orphan jobs; the user is always cold and always has to wait for a new job! These orphan jobs sit unused in the active job zone until they time out.
If each user uses a unique user ID and password, each user gets her own privately pooled job.
SOX refers to the Sarbanes-Oxley Act which, driven by the Enron fiasco, imposes very strict accountability on public businesses.