Downtime from planned maintenance or unplanned outages can hurt businesses. An abstraction layer like database load balancing software is critical to achieve zero downtime. It acts as a buffer between applications and databases, leveraging features like replication and failover to seamlessly direct traffic during outages. This prevents applications from crashing and allows servers to be taken offline without interrupting users.
Web application optimization happens at the application layer through database optimization, query caching, and code caching, and at the presentation layer through cache control and minifying web content. Browsers cache resources using headers, and cached resources are revalidated periodically. Tools like YSlow and Firebug can analyze performance, and web servers can be tuned through expiration headers, gzip/deflate compression, and other techniques.
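The expiration-header and gzip techniques above can be sketched in a few lines. This is a minimal, framework-free illustration using Python's standard library; in practice these headers are set by the web server or middleware, and the `build_response` helper is hypothetical.

```python
import gzip
from email.utils import formatdate

def build_response(body: bytes, max_age: int = 86400):
    """Return (headers, payload) with an expiration header and gzip compression.

    A sketch of the caching/compression techniques described above; real
    servers configure this in httpd.conf or framework middleware.
    """
    payload = gzip.compress(body)
    headers = {
        "Cache-Control": f"public, max-age={max_age}",  # let browsers and proxies cache
        "Content-Encoding": "gzip",
        "Content-Length": str(len(payload)),
        "Date": formatdate(usegmt=True),
    }
    return headers, payload

headers, payload = build_response(b"<html>hello</html>" * 100)
```

For repetitive HTML like this, the compressed payload is far smaller than the original, which is exactly the saving gzip/deflate compression buys on the wire.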
This document discusses the browser performance analysis tool dynaTrace. It provides an overview of dynaTrace's capabilities such as cross-browser diagnostics, code-level visibility, and deep JavaScript and DOM tracing. It also covers key performance indicators (KPIs) like load time, resource usage, and network connections that dynaTrace measures. Best practices for improving performance, such as browser caching, network optimization, JavaScript handling and server-side performance are outlined. The document aims to explain why and how dynaTrace can help users find and address web performance issues.
This document discusses Oracle's In-Memory Database Cache and TimesTen in-memory database. It provides an overview of how the cache works, including options for read-only or updatable caches, automatic synchronization with Oracle Database, and scaling out the cache on multiple nodes. Tools are mentioned for managing cache groups, monitoring performance, and integrating with Oracle products like SQL Developer. The in-memory database provides extreme performance, high availability, and scalability.
8 cloud design patterns you ought to know - Update Conference 2018 (Taswar Bhatti)
This document discusses 8 cloud design patterns: External Configuration, Cache Aside, Federated Identity, Valet Key, Gatekeeper, Circuit Breaker, Retry, and Strangler. It provides an overview of each pattern, including what problem it addresses, when to use it, considerations, and examples of cloud offerings that implement each pattern. It aims to help developers understand and apply common best practices for cloud application design.
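Two of the patterns listed, Retry and Circuit Breaker, can be sketched compactly. This is a toy, in-memory illustration under assumed semantics (the `CircuitBreaker` class and `retry` helper are hypothetical); production systems would add jitter, half-open states, and per-exception policies.

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after `threshold` consecutive failures it opens
    and fails fast instead of hammering a struggling dependency."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open; failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0   # any success resets the count
        return result

def retry(fn, attempts=3, base_delay=0.01):
    """Retry pattern: re-invoke fn with exponential backoff between attempts."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

# Retry succeeds once the transient fault clears.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient")
    return "ok"
result = retry(flaky)

# The breaker opens after repeated failures.
breaker = CircuitBreaker(threshold=2)
def always_fails():
    raise IOError("dependency down")
for _ in range(2):
    try:
        breaker.call(always_fails)
    except IOError:
        pass
```

The key design point is that retry handles transient faults while the breaker protects against sustained ones; the two are usually composed, with the breaker outermost.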
The document discusses various techniques for optimizing UI performance, including optimizing caching, minimizing round-trip times, minimizing request size, minimizing payload size, and optimizing browser rendering. Specific techniques mentioned include leveraging browser and proxy caching, minimizing DNS lookups and redirects, combining external JavaScript, minimizing cookie and request size, enabling gzip compression, and optimizing images. Profiling and heap analysis tools are also discussed for diagnosing backend performance issues.
How to boost performance of your Rails app using DynamoDB and Memcached (Andolasoft Inc)
DynamoDB and Memcached are a powerful combination for your Rails app. If you're looking to improve the performance of your Rails application, this pairing may be the solution for you.
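The usual way Memcached speeds up an app like this is the cache-aside pattern: try the cache first, fall back to the database, and populate the cache on a miss. The sketch below is in Python with a dict-backed stand-in for a Memcached client (mirroring the common `get`/`set` client shape); `load_user_from_db` is a hypothetical stand-in for a slow query.

```python
class FakeCache:
    """Stand-in for a Memcached client (same get/set shape as typical clients)."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def set(self, key, value, expire=300):
        self.store[key] = value

db_reads = []

def load_user_from_db(user_id):
    db_reads.append(user_id)          # pretend this is a slow database query
    return {"id": user_id, "name": f"user-{user_id}"}

cache = FakeCache()

def get_user(user_id):
    """Cache-aside read: check the cache first, fall back to the database,
    and write the result back so the next read is a cache hit."""
    key = f"user:{user_id}"
    user = cache.get(key)
    if user is None:
        user = load_user_from_db(user_id)
        cache.set(key, user)
    return user

first = get_user(42)
second = get_user(42)   # served from the cache; no second DB read
```
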
Azure database services for PostgreSQL and MySQL (Amit Banerjee)
The slide deck Rachel and I used to present an overview of the managed PostgreSQL and MySQL services on Azure at SQL Saturday Redmond 2018. These services are part of the Azure Database family.
ThousandEyes provides network intelligence and monitoring of web performance. It offers different test types - HTTP server tests measure server response times, page load tests measure loading of full web pages in a browser, and web transaction tests measure performance of specific user interactions on a site. The tests provide metrics on response times, throughput, errors and performance of individual page components from different network locations and internet providers. The document recommends tips for optimizing web transactions such as adjusting timeouts, configuring start/stop steps, using XPath locators, and inserting wait conditions. It demonstrates creating and running page load, HTTP server and web transaction tests to monitor web performance.
Leveraging ApsaraDB to Deploy Business Data on the Cloud (Oliver Theobald)
This presentation walks you through the journey of launching your company's database on the cloud, and how to use ApsaraDB to reduce the cost of ownership. The presentation will provide an in-depth discussion of technical principles regarding the usage of cloud database technology.
From this webinar, you will also learn:
- How to implement cross-room disaster recovery deployment and ensure data consistency
- How to guarantee cloud data security
This document discusses various cloud design patterns for common problems such as retries, circuit breakers, throttling, leader election, and static content hosting. It provides examples of the retry pattern using services such as Azure Storage, SQL Database, Service Bus, Cache, and DocumentDB. Solutions for other patterns like queue-based load leveling, index tables, and valet keys are also briefly outlined.
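Queue-based load leveling, one of the patterns mentioned above, decouples a bursty producer from a consumer that drains work at a steady rate. A minimal single-process sketch using Python's standard library (the worker/sentinel arrangement here is illustrative; in the cloud the queue would be a service like Azure Service Bus):

```python
import queue
import threading

tasks = queue.Queue()
processed = []

def worker():
    """Single consumer drains the queue at its own pace, leveling bursts."""
    while True:
        item = tasks.get()
        if item is None:              # sentinel: shut down cleanly
            break
        processed.append(item * 2)    # stand-in for real work
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(10):                   # a burst of requests just enqueues work
    tasks.put(i)
tasks.put(None)
t.join()
```

The burst never overwhelms the worker: requests return as soon as the message is enqueued, and the backlog is absorbed by the queue rather than by the service.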
10 performance and scalability secrets of ASP.NET websites (oazabir)
1) ASP.NET does not scale to millions of hits out of the box; it requires optimizations at the code, database, and configuration levels. Common optimizations include tweaking process model settings, removing unnecessary pipeline components, and using compiled LINQ queries.
2) Issues like application-level DoS attacks, slow profile-provider stored procedures, and LINQ to SQL performance problems can be addressed to improve scalability. Using a CDN can also help offload static content delivery.
3) Database queries must consider index usage and transaction isolation levels to prevent timeouts and deadlocks under high load.
How to improve your Apache web server's performance (Andolasoft Inc)
The performance of a web application depends on the performance of both the web server and the database server. You can increase your web server's performance either by adding hardware resources such as RAM or a faster CPU, or by tuning the server's configuration.
What SQL DBAs need to know about SharePoint (J.D. Wade)
This document discusses what SQL DBAs need to know about implementing and managing SharePoint databases. It covers topics such as the SQL implementation challenges in SharePoint, recommended database configuration including storage, sizing, and high availability options. It also discusses maintenance best practices for SharePoint databases such as monitoring, integrity checks, index maintenance and shrinking databases. Upcoming enhancements in SharePoint 2010 that improve SQL integration are also mentioned.
This document discusses designing domain-driven microservices using CQRS patterns. It recommends modeling microservices around bounded contexts and aligning code to business problems. While DDD is useful for complex services, simpler architectures may suffice for CRUD services. The document also describes how to implement CQRS by separating read and write models, using commands for writes and queries for reads. This improves performance, scalability and permission management. The example architecture shows a gateway routing requests, with separate persistence and querying servers each making single database calls to minimize response times.
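The read/write separation described above can be shown in miniature: a command handler validates and mutates the authoritative store, then keeps a denormalized projection in sync, while query handlers only ever read the projection. A toy sketch under assumed names (`handle_create_order`, `query_order_summary` are hypothetical); real CQRS systems typically sync the read model asynchronously via events.

```python
# Write side: commands mutate the authoritative store.
write_store = {}
# Read side: a denormalized projection optimized for queries.
read_view = {}

def handle_create_order(order_id, item, qty):
    """Command handler: validate, persist, then update the projection."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    write_store[order_id] = {"item": item, "qty": qty}
    read_view[order_id] = f"{qty} x {item}"   # projection kept in sync

def query_order_summary(order_id):
    """Query handler: one cheap read; never touches the write model."""
    return read_view.get(order_id)

handle_create_order("o-1", "widget", 3)
summary = query_order_summary("o-1")
```

This is what makes the "single database call per query" architecture possible: the query side is shaped exactly like the answers it serves.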
Modern Cloud Fundamentals: Misconceptions and Industry Trends (Christopher Bennage)
A discussion of misconceptions, problems, and industry trends that hinder adoption of cloud technology; with an emphasis on scenarios that appear to work but fail at critical moments.
Be sure to read the notes!
The document discusses web servers and their key components and functions. It covers:
1) The definition of a web server as a program that generates and transmits responses to client requests for web resources by parsing requests, authorizing access, and constructing responses.
2) How web servers handle client requests through steps like parsing requests, authorizing access, and transmitting responses. They can also dynamically generate responses through server-side includes and server scripts.
3) Techniques web servers use like access control through authentication and authorization, passing data to scripts, using cookies, caching responses, and allocating resources through event-driven, process-driven, and hybrid architectures.
This document provides an overview of Azure SQL Managed Instance and how it compares to other Azure SQL options. It discusses how Managed Instance takes care of database management tasks like backups, high availability, and updates. It also summarizes the service tiers of General Purpose and Business Critical and their key features like storage performance and read replicas. Finally, it outlines approaches for migrating databases to Managed Instance using tools like DMA and restoring backups.
(ATS6-PLAT09) Deploying Applications on load balanced AEP servers for high av... (BIOVIA)
This document discusses deploying Accelrys Enterprise Platform (AEP) servers in a load balanced configuration for high availability. It recommends using a staging server to test configurations before deploying to production nodes. All nodes should be configured identically and share storage. A load balancer should be configured to distribute traffic evenly across nodes. Applications need to be packaged and deployed identically to each node to ensure consistency across the load balanced farm. Load balancing improves availability, scalability and performance but requires additional infrastructure and configuration.
The slides from this presentation were used in a live webinar that covered a variety of MongoDB Data Management topics including: How quickly can organizations recover from accidental data loss or ransomware, how to ensure compliance and security of PII when mirroring across different environments, specific architectural considerations to consider when running in a hybrid or pure cloud environment.
This document discusses database monitoring and how Docker containers can be used. It covers requirements for DB monitoring like scheduling, script libraries, deployment, alerts, stability, and security. It then discusses specific aspects of scheduling, deployment to target servers, alerts and actions. Finally, it summarizes what Trans App Data Technologies offers for DB monitoring including using Docker containers to provide features like high availability, auto-restart of containers, small footprints and centralized script management.
This document discusses monitoring CDN performance from the user to the edge server to the origin server using ThousandEyes. It provides an overview of CDN architecture and monitoring methods including benchmarking performance, ensuring the proper edge server is used, identifying cache issues, and setting alerts. Specific tips are provided on using response headers, customizing alerts, and demoing tests from the user to edge to origin.
This document contains a resume for Aruna Kumar K R, a linguist professional with over 7 years of experience in Kannada translation, proofreading, and editing. He has a master's degree in English literature and postgraduate diploma in translation. He has worked as a lead external linguist for Kannada on various projects for Google, providing translation from English to Kannada and vice versa, linguistic review, and language quality assurance. His skills include translation tools like Idiom Desktop Workbench and technical understanding of translation workflows.
The document discusses how hackers and open data are helping the city of Regina. It describes how the city has sponsored two hackathons where over 30 attendees created 30 applications, with about half using Regina's open data. The city benefits from open data by attracting developers to create applications with the data, reducing costs, and improving the city's image.
This document provides tips for how to hire skilled software developers, referred to as "hackers". It recommends focusing on candidates' experience, coding skills, and ability to improve rather than specific programming language experience. Employers should seek developers who can identify issues in code and appreciate code structure. The document advises attracting candidates by engaging with them in coding communities, hosting open houses, and demonstrating an interesting work environment and culture fit over technical skills alone.
Review of the history of web development and trends that indicate where the future of webdev is going.
Slides for a talk I gave at BarCamp Saskatoon - please refer to the notes for the actual slide content
Async code allows long-running operations like network and file access to execute without blocking the UI thread. There have been several approaches to async programming in .NET, including the Asynchronous Programming Model (APM), the Event-based Asynchronous Pattern (EAP), and the Task Parallel Library. The newest approach uses the async and await keywords, which allow methods to be suspended until async operations complete and make control flow easier to reason about.
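The async/await idea is language-agnostic; the same suspend-and-resume control flow the deck shows in C# looks like this in Python with asyncio (the `fetch` coroutine is a hypothetical stand-in for a network call):

```python
import asyncio

async def fetch(name, delay):
    """Simulates long-running I/O: the await point yields control
    instead of blocking the thread."""
    await asyncio.sleep(delay)
    return name

async def main():
    # Both "requests" run concurrently; total time is roughly the
    # slowest one, not the sum.
    return await asyncio.gather(fetch("a", 0.05), fetch("b", 0.05))

results = asyncio.run(main())
```

As in .NET, the code reads top-to-bottom like synchronous code, but each `await` is a suspension point where the event loop can run other work.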
E-government solution by COTSYS, version 1-5 (COTSYS LTD)
COTSYS is an IT services provider that offers e-government solutions to transform governments and make them IT-enabled. Its solutions include integrating government ministries and departments through a centralized online system, allowing citizens to access services and complete transactions online. COTSYS' phased approach moves governments from basic web presence to fully integrated online services across organizations. It promises benefits like cost savings, faster delivery, and acting as the client's virtual IT division to manage systems.
This document discusses using WSO2 products to enable interoperability between government organizations. It describes how paper-based exchange of documents between public agencies can be replaced by digital exchange. WSO2 Enterprise Service Bus and Governance Registry allow organizations to securely integrate and share data using standard integration patterns. This improves processes for citizens and businesses by automating document retrieval and validation between organizations digitally.
The document outlines the need to modernize the state government treasury system to provide online billing, payment, accounts, and management information capabilities. Key objectives of the new system include online receipt and payment of bills, release of funds, tax/non-tax payments, daily accounts, and comprehensive reporting. The new system will integrate treasuries, finance departments, accountant general offices, and other stakeholders. An implementation plan is proposed involving selecting a system integrator to develop and deploy the new Khajane II application along with necessary infrastructure and migration. A project monitoring unit is also recommended to oversee the modernization effort.
This document discusses biometric identification and its uses and challenges. It describes how biometrics like fingerprints, iris scans, and DNA can be used to identify individuals, but also how current biometric systems have security flaws: centralized biometric databases are vulnerable if hacked, and fingerprints can be fooled. The document proposes a future where biometric hashes combined with passwords provide secure, anonymous digital identities without the risks of centralized databases.
This document discusses architectures for e-government systems that effectively connect primary registers containing citizen data. It analyzes past failed attempts in Bulgaria called ESOED and RegiX, which used enterprise service bus (ESB) approaches. The document argues decentralized peer-to-peer (P2P) architectures like Estonia's X-Road protocol are better. X-Road uses security servers instead of a centralized ESB, supports subscription-based data exchange, and has enabled over 200 registers and 900 institutions to securely exchange 600 million transactions annually. The document concludes e-government systems should use standard protocols and components to access register data as a service, with an emphasis on simple, decentralized designs.
The document discusses e-government strategies and provides examples. It covers the following key points:
E-government strategies aim to improve government services through technology. They require defining goals, assessing current systems, and implementing projects in phases while measuring outcomes. The document also provides an example of India's National e-Governance Plan which aims to deliver online services nationwide through local service centers over 8 years at a cost of $4 billion.
Huawei provides solutions for smart cities that address four megatrends: aging populations in developed nations, economic shifts to emerging countries, population growth concentrated in emerging nations, and increased urbanization worldwide. Huawei's smart city model focuses on creating a safe and orderly society, green and sustainable economy, and happy and healthy lives through technologies like emergency command centers, video surveillance, intelligent traffic systems, digital healthcare, and more. Case studies show how Huawei has implemented solutions for areas like e-government, safe cities, e-education, and e-health in countries around the world to address challenges from these megatrends and enable smarter, more efficient cities.
This document discusses strategies for building scalable and high-performing web applications. It explains that scalability refers to the ability to handle increased load by adding more resources, while performance refers to individual request response times. The key to scalable performance is distributing load across application tiers and optimizing each tier individually. Bottlenecks should be identified and addressed starting from the earliest possible tier. Common techniques include caching, database optimization, thread pool tuning, and horizontal scaling.
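Of the techniques listed, caching is the one most easily shown in isolation: absorbing repeated expensive work at one tier so it never reaches the tier below. A minimal TTL-cache decorator as a sketch (the `ttl_cache` helper and `expensive_report` function are hypothetical; real deployments would use a shared cache such as Memcached or Redis):

```python
import time

def ttl_cache(ttl_seconds):
    """Decorator: cache a function's results for ttl_seconds per argument tuple."""
    def wrap(fn):
        store = {}
        def inner(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and now - hit[1] < ttl_seconds:
                return hit[0]              # fresh cached value: skip the work
            value = fn(*args)
            store[args] = (value, now)
            return value
        return inner
    return wrap

calls = []

@ttl_cache(ttl_seconds=60)
def expensive_report(region):
    calls.append(region)                   # stands in for a slow DB aggregation
    return f"report for {region}"

first = expensive_report("emea")
second = expensive_report("emea")          # cache hit; no second computation
```
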
Configuring Apache Servers for Better Web Performance (Spark::red)
Apache is the most popular web server in the world, yet its default configuration can't handle high traffic. Learn how to setup Apache for high performance sites and leverage many of its available modules to deliver a faster web experience for your users. Discover how Apache can max out a 1 Gbps NIC and how to serve over 140,000 pages per minute with a small Apache cluster. This presentation was given by Spark::red's founding partner Devon Hillard in March 2012 at the Boston Web Performance Meetup.
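The kind of tuning the talk describes lives in Apache's configuration. The fragment below is an illustrative sketch only, not the presenter's actual settings; every value here must be tuned to the workload and hardware.

```apache
# Illustrative Apache 2.4 fragment; values are placeholders, not recommendations.
KeepAlive On
KeepAliveTimeout 3              # a short timeout frees workers quickly
MaxKeepAliveRequests 200

<IfModule mod_deflate.c>
    # Compress text responses on the way out.
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>

<IfModule mpm_worker_module>
    ServerLimit        16
    ThreadsPerChild    25
    MaxRequestWorkers  400      # ServerLimit x ThreadsPerChild
</IfModule>
```

Directives like these (keep-alive tuning, mod_deflate, and MPM worker sizing) are typically where the headroom for high-traffic sites comes from before adding hardware.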
Expect the unexpected: Prepare for failures in microservices (Bhakti Mehta)
My talk at Confoo 2016 Montreal
It is well said that "The more you sweat on the field, the less you bleed in war." Failures are an inevitable part of complex systems. Accepting that failures happen will help you design the system's reactions to specific failures.
This talk covers best practices for building resilient, stable, and predictable services: preventing cascading failures, the timeouts pattern, the retry pattern, circuit breakers, and many more microservices techniques.
CDNs improve content delivery over the internet by replicating popular content on servers located close to users. This allows users to retrieve content from nearby CDN nodes rather than distant origin servers, reducing latency. CDNs select the optimal server using policies like geographic proximity, load balancing, and performance monitoring. They redirect clients to CDN nodes using techniques like DNS responses and HTTP redirection. This improves the end user experience through faster delivery, lowers network congestion, and increases the scalability and fault tolerance of popular websites.
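The server-selection policies described above (geographic proximity plus load) can be sketched as a small routing function. This is a toy model under assumed data shapes; real CDNs combine live latency measurements, anycast, and DNS-based redirection.

```python
def pick_edge_node(client_region, nodes):
    """Toy CDN request routing: prefer a node in the client's region,
    then break ties by current load (a stand-in for real geo/latency
    and load-balancing policies)."""
    local = [n for n in nodes if n["region"] == client_region]
    candidates = local or nodes      # no local node: fall back to any node
    return min(candidates, key=lambda n: n["load"])["host"]

nodes = [
    {"host": "edge-eu-1", "region": "eu", "load": 0.7},
    {"host": "edge-eu-2", "region": "eu", "load": 0.2},
    {"host": "edge-us-1", "region": "us", "load": 0.1},
]
best = pick_edge_node("eu", nodes)
```

In practice the DNS resolver or an HTTP redirect returns the chosen node's address, which is the redirection mechanism the summary mentions.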
Building & Testing Scalable Rails Applicationsevilmike
This document discusses building scalable Rails applications. It covers using multiple Rails processes and servers to handle concurrent requests. It recommends optimizing database queries, caching, offloading long tasks, and serving static assets externally. It also provides tips for load testing including using realistic data and environments, considering location and caching effects, and paying attention to request headers.
Jerry Lewis, VP of an IBM practice, discusses website performance and scalability for eCommerce. He shares horror stories of performance issues causing major revenue losses and customer complaints. Website performance is important because slow sites hurt sales and customer experience. Common causes of bad performance include inefficient code, database issues, and third party integration problems. To achieve good performance, websites must be designed with performance in mind from the start, with strategies like caching, efficient database usage, and infrastructure tuning.
Speaker: Darlene Nerden, IBM
Overview: In this session will review the Maximo architecture and factors that influence performance. We will discuss some details for those factors regarding tuning for a performance impact. We will look at troubleshooting tools and Maximo settings to help identify and resolve a Maximo performance issue.
SQL Server ASYNC_NETWORK_IO Wait Type ExplainedConfio Software
When a SQL Server session waits on the async network io event, it may be encountering issues with the network or with aclient application not processing the data quickly enough. If the wait times for "async network io" are high, review the client application to see if large results sets are being sent to the client. If they are, work with the developers to understand if all the data is needed and reduce the size of result set if possible. Learn tips and techniques for decreasing decrease waits for async_network_io in this presentation.
Basic introduction to cache and how it works. It als discuss about cache types and implementation techniques. Its advantages and drawbacks. Caching frameworks and tools.
Resilience planning and how the empire strikes backBhakti Mehta
t is well said that "The more you sweat on the field, the less you bleed in war". Failures are an inevitable part of complex systems. Accepting that failures happen, will help you design the system's reactions to specific failures.
This talks on best practices for building resilient, stable and predictable services: preventing cascading failures, timeouts pattern, retry pattern,circuit breakers and other techniques which have been pervasively used at Blue Jeans Network. Join me in this talk which ensures that the show must go on in spite of random load, stress or other failures!
Boost the Performance of SharePoint Today!Brian Culver
Is your farm struggling to server your organization? How long is it taking between page requests? Where is your bottleneck in your farm? Is your SQL Server tuned properly? Worried about upgrading due to poor performance? We will look at various tools for analyzing and measuring performance of your farm. We will look at simple SharePoint and IIS configuration options to instantly improve performance. I will discuss advanced approaches for analyzing, measuring and implementing optimizations in your farm as well as Performance Improvements in SharePoint 2013.
Predicates allow filtering events based on:
- Event properties (fields)
- Session properties
- System properties
They are evaluated synchronously when the event fires. This allows filtering events and reducing overhead compared to capturing all events.
Common predicates:
- event_name = 'sql_statement_completed'
- database_id = 5
- cpu_time > 1000
Predicates give granular control over what events are captured.
Creating a Centralized Consumer Profile Management Service with WebSphere Dat...Prolifics
In this presentation will talk about how one of the world's leading Financial Institutions, leveraged WebSphere DataPower to provide a set of centralized consumer profile management services. This central service would be leveraged by internal and external applications, and would align with enterprise marketing capabilities. The solution included a complex security model which included the following products: Tivoli Directory Server, Tivoli Access Manager and Tivoli Federated Identity Manager. We will describe how to build complex orchestrations in WebSphere DataPower, and also go through some of the performance tuning options we implemented to achieve a high degree of efficiency.
AWS re:Invent 2016: Amazon CloudFront Flash Talks: Best Practices on Configur...Amazon Web Services
In this series of 15-minute technical flash talks you will learn directly from Amazon CloudFront engineers and their best practices on debugging caching issues, measuring performance using Real User Monitoring (RUM), and stopping malicious viewers using CloudFront and AWS WAF.
Work with hundred of hot terabytes in JVMsMalin Weiss
Third-party updates to the database can cause Hazelcast applications to work with data which is out-of-date.
By synchronizing with an underlying database using an SQL Reflector, the Hazelcast Maps will be “alive” and change whenever the underlying data changes. The solution can also automatically derive domain models directly from the database schemas, so that you can start using the solution very quickly and handle extreme volumes of data.
In order to obtain the best performance possible out of your AEP server, the core architecture provides methods to reuse job processes multiple times. This talk will cover how the mechanism functions, what performance improvements you might expect as well as what potential problems you might encounter, how to use pooling in protocols and applications, and how the administrator or package developers can configure and debug specialized job pools for their particular applications
This document discusses web performance optimization and provides tips to improve performance. It emphasizes that performance is important for user experience, search engine optimization, conversion rates, and costs. It outlines common causes of performance issues like round-trip times, payload sizes, browser rendering delays, and inefficient JavaScript. Specific recommendations are given to optimize images, stylesheets, scripts, and browser rendering through techniques like compression, caching, deferred loading, and efficient coding practices. A variety of tools for measuring and improving performance are also listed.
Cloud Design Patterns - Hong Kong CodeaholicsTaswar Bhatti
Talk on Cloud Design Patterns at Hong Kong Codeaholics Meetup Group. Talk includes External Config Pattern, Cache Aside, Federated Identity Pattern, Valet Key Pattern, Gatekeeper Pattern, Circuit Breaker Pattern, Retry Pattern and the Strangler Pattern. These patterns depicts common problems in designing cloud-hosted applications and design patterns that offer guidance.
This document introduces WebStress, a tool for load testing and benchmarking web applications. It discusses recording scripts from live browser traffic, customizing scripts, defining tests with multiple scripts, running benchmark tests, and validating results. Exercises are provided to demonstrate recording a script, editing a script, creating a test, running a benchmark, and using debug mode. WebStress allows load testing for performance evaluations on applications like TrakCare without requiring expensive tools.
Is your farm struggling to server your organization? How long is it taking between page requests? Where is your bottleneck in your farm? Is your SQL Server tuned properly? Worried about upgrading due to poor performance? We will look at various tools for analyzing and measuring performance of your farm. We will look at simple SharePoint and IIS configuration options to instantly improve performance. I will discuss advanced approaches for analyzing, measuring and implementing optimizations in your farm.
Similar to Caching up is hard to do: Improving your Web Services' Performance (20)
The document discusses the Model-View-Controller (MVC) pattern and how Backbone.js implements it for single-page web applications. MVC originated in the 1970s and separates an application into three responsibilities - the model manages the data, view displays it, and controller handles user input. Backbone.js provides structure for web apps using MVC concepts with a RESTful API, event system, and routing. It embraces extensibility while remaining unopinionated.
AJAX, JSON, and client-side templates allow for asynchronous and partial page updates without reloading the entire web page. AJAX uses XMLHttpRequest and JavaScript to make asynchronous requests in the background. JSON is a lightweight data format that is easy for humans and machines to parse. Client-side templates separate data and layout so that only small amounts of data need to be transferred, improving page load times and reducing network traffic compared to traditional full-page reloads.
Manufacturers have hit limits for single-core processors due to physical constraints, so parallel processing using multiple smaller cores is now common. The .NET framework includes classes like Task Parallel Library (TPL) and Parallel LINQ (PLINQ) that make it easy to take advantage of multi-core systems while abstracting thread management. TPL allows executing code asynchronously using tasks, which can run in parallel and provide callbacks to handle completion and errors. PLINQ allows parallelizing LINQ queries.
Node.js is a JavaScript runtime built on Chrome's V8 engine that allows JavaScript to be run on the server-side. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, especially for real-time applications with heavy network use. While it shares a language with client-side JavaScript, Node.js is meant for server-side applications and not in the browser.
This document discusses Reactive Extensions (Rx), which provides interfaces and methods for implementing "pull-based" or observable systems. It describes the difference between pull-based and push-based models. Rx includes IObservable and IObserver interfaces for creating and observing asynchronous data streams. It also includes Observable and Observer classes that make it easier to create observables without defining classes. Rx integrates with LINQ to add query operators for observables.
The document provides an overview of SQL vs NoSQL databases. It discusses how RDBMS systems focus on ACID properties to ensure consistency but sacrifice availability and scalability. NoSQL systems embrace the CAP theorem, prioritizing availability and partition tolerance over consistency to better support distributed and cloud-scale architectures. The document outlines different NoSQL database models and how they are suited for high volume operations through an asynchronous and eventually consistent approach.
Git is a distributed version control system created by Linus Torvalds in 2005 as an alternative to BitKeeper. It allows developers to have a complete history of the source code on their local machine and supports a distributed workflow. Commits in Git link back to previous commits and contain references to file trees and parent commits. Git uses references and branching to efficiently track changes from multiple developers and integrate their work.
This document provides an introduction to the F# programming language. It discusses that F# was created by Microsoft Research in 2005 and is based on functional programming concepts from languages like ML and OCaml. It then gives examples of how F# uses immutable values, type inference, currying of functions, and anonymous functions to allow for powerful and flexible programming. The document aims to explain core F# concepts like functions, types, and immutability in an accessible way for beginners.
The document discusses different approaches to building web services:
- Remote Procedure Call (RPC) uses SOAP and WSDL but is complicated to implement.
- RESTful services use standard HTTP methods to interact with resources through clean URLs and return data in XML or JSON formats. REST services are easier to build and consume.
- REST focuses on stateless resources and uses HTTP verbs like GET, PUT, POST and DELETE to perform CRUD operations on resources accessed through URLs.
The document discusses how social gaming concepts can be applied to businesses. It provides Helen as an example of someone whose job as a World of Warcraft guild officer mirrors that of an HR manager. The document then discusses social gaming statistics and concepts like avatars, narrative context, feedback systems, reputation/ranks, competition with rules, and teamwork that could be applied to businesses. It concludes by suggesting businesses start by collaborating with gamers to create a game and adjust it frequently based on player feedback.
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
"Scaling RAG Applications to serve millions of users", Kevin GoedeckeFwdays
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAGs and Vector databases.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc...DanBrown980551
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...Fwdays
Direct losses from downtime in 1 minute = $5-$10 thousand dollars. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...
Caching up is hard to do: Improving your Web Services' Performance
1. Caching Up is Hard To Do
Chad McCallum
ASP.NET MVP
iQmetrix Software
www.rtigger.com - @ChadEmm
Improving the performance of your web services
2. Here’s your API
• Client makes a request
• Bounces around the internet to web server
• Web server runs application, queries DB
• Database returns data from query
• Application serializes response and writes to client
• Bounces back through internet to client
• Client receives data
Client → The Internet → Web Server → Database
3. An example request
Client → The Internet → Web Server → Database
• Client makes a request
• Bounces around the internet to web server
• Web server runs application, queries DB
• Database returns data from query
• Application serializes response and writes to client
• Bounces back through internet to client
• Client receives data
Measured: 270ms, 598ms (application time), 3474ms and 4342ms (response time), 528kb (response size) – the baselines the rest of the deck improves on
4. Start at the data
• Optimize the database server
• Logical Design – efficient queries, application-specific schema, constraints, normalization
• Physical Design – indexes, table & system settings, partitions, denormalization
• Hardware – SSDs, RAM, CPU, Network
5. Between your App and the Data
• Reduce the complexity of your calls – get only the data you need
• Reduce the number of calls – return all the required data in one query
• Make calls async – perform multiple queries at the same time*
• The fastest query is the one you never make
• Cache the result of common queries in an application-level or shared cache
Application time: 598ms → 315ms (47%)
6. Caching Data
• Great for static or relatively unchanged data
• Product Catalogs
• Order History
• Not so great for volatile data
• Store Quantity
• Messages
• Comes with a memory price
• Shared Cache when working with a web farm
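The deck's examples are ASP.NET, but the cache-aside pattern behind these slides is framework-agnostic. A minimal Python sketch, in which `load_product_catalog` and the 60-second TTL are made-up stand-ins for a real query and a real expiry policy:

```python
import time

class TTLCache:
    """Application-level cache: serve hits from memory, reload on expiry."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def get_or_load(self, key, loader):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]  # cache hit: no database round-trip
        value = loader()     # cache miss: run the query once...
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value         # ...then serve from memory until the TTL lapses

# Hypothetical loader standing in for a database query; it counts its calls
calls = []
def load_product_catalog():
    calls.append(1)
    return ["widget", "gadget"]

cache = TTLCache(ttl_seconds=60)
cache.get_or_load("catalog", load_product_catalog)  # miss: runs the loader
cache.get_or_load("catalog", load_product_catalog)  # hit: served from memory
```

Note the memory price the slide mentions: everything cached stays resident until it expires, and in a web farm this in-process dictionary would need to be replaced by a shared cache such as memcached.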
7. Inside your App
• Standard “MVC” Flow
• Request comes into Web Server over network connection
• Framework parses request URL and other variables to determine which controller and method to execute, checking against routes
• Framework creates instance of controller class and passes copy of request object to appropriate method
• Method executes, returning an object to be sent in response
• Framework serializes response object into preferred type as requested by client
• Web server writes response back to client over network connection
8. Inside your App
• The most we can reasonably do is optimize our controller’s method
• “Reasonably” meaning not doing crazy things to the underlying framework code / dependencies
• The fastest method is one you don’t execute
• Cache the serialized result of common API calls
Application time: 598ms → 296ms (51%)
9. Caching Responses
• Great for endpoints that don’t take parameters
• Get
• Not so great for endpoints that do take parameters
• Get By ID
• reports with date ranges
• Get with filters
• Cache all supported serialization formats
• Same cache concerns – memory usage, shared cache in farm setup
10. From Server to Client
• We can’t really change the topology of a client’s network connection
• We can send less data
• HTTP Compression
Response time: 3474ms → 1083ms (69%)
Response size: 528kb → 129kb (76%)
11. HTTP Compression
• Trading response size for server CPU cycles
• Output can be cached (and often is) by web server to avoid re-compressing the same thing
• Client requests compression using Accept-Encoding header
Application time: 598ms → 624ms (4% increase)
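The trade-off above is easy to demonstrate: gzip shrinks a repetitive text payload dramatically at the cost of CPU cycles spent compressing. A Python sketch; the JSON-ish payload is a made-up stand-in for a real response body:

```python
import gzip

def compress_if_accepted(body, accept_encoding):
    """Honour the client's Accept-Encoding header before compressing."""
    if "gzip" in accept_encoding.lower():
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}  # client did not ask for compression

# Repetitive payloads (JSON, HTML) compress very well
body = b'{"sku": "widget", "qty": 1}' * 1000
compressed, headers = compress_if_accepted(body, "gzip, deflate")
assert len(compressed) < len(body)  # size traded for CPU time
```

As the slide notes, a real web server would usually cache the compressed output rather than re-compress the same response on every request.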
12. Paging
• Don’t send everything!
• Only returning 20 items
• Page objects using OData Queries in WebAPI
• Returning IEnumerable<T> will page in-memory
• Returning IQueryable<T> will (attempt to) page at the database layer
Response time: 3474ms → 7ms (99.8%)
Response size: 528kb → 10kb (98.1%)
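OData query paging and the IEnumerable&lt;T&gt; / IQueryable&lt;T&gt; distinction are WebAPI specifics; the underlying idea is just slicing. A Python sketch of offset/limit paging, where `items` is a pretend 100-row result set:

```python
def page(items, page_number, page_size=20):
    """In-memory paging over an already-materialized list, analogous to
    paging an IEnumerable<T>. For big tables, push the same arithmetic
    into the query itself (e.g. SQL LIMIT/OFFSET), analogous to paging
    an IQueryable<T> at the database layer."""
    start = (page_number - 1) * page_size
    return items[start:start + page_size]

items = list(range(1, 101))  # pretend result set of 100 rows
first = page(items, 1)       # rows 1..20, matching "only returning 20 items"
second = page(items, 2)      # rows 21..40
```

The in-memory version still pays to fetch every row from the database; the database-layer version is what produces the dramatic numbers on this slide.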
13. Conditional Headers
• Server can send an ETag and/or a Last-Modified header with the response
• ETag = identifier for a specific version of a resource
• Last-Modified = the last time this resource was modified
• Clients can include that data in subsequent requests
• If-None-Match: “etag value”
• If-Modified-Since: (http date)
• Server can respond with a simple “304 Not Modified” response
14. Conditional Headers
Response time: 3474ms → &lt;1ms (99.9%)
Response size: 528kb → 0.3kb (99.9%)
• Avoid database calls to validate requests
• Cache last modified times & etag values
• May have to modify client code to retain and send Last-Modified and ETag values
• Most browsers will automatically include If-Modified-Since, but some do not include If-None-Match
• Non-browser code (SDKs, WebClient, HttpClient)
Application time: 598ms → 323ms (54%)
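The conditional-request handshake from slide 13 is small enough to sketch end to end. Hashing the response body is one common way to derive an ETag (an assumption here; any identifier that changes when the resource changes will do):

```python
import hashlib

def etag_for(body):
    # Derive a version identifier from the representation itself
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]

def respond(body, if_none_match):
    """Return (status, payload, headers), honouring If-None-Match."""
    tag = etag_for(body)
    if if_none_match == tag:
        return 304, b"", {"ETag": tag}  # nothing to serialize or send
    return 200, body, {"ETag": tag}

body = b'{"catalog": ["widget", "gadget"]}'
status1, payload1, h1 = respond(body, None)        # first request: full 200
status2, payload2, h2 = respond(body, h1["ETag"])  # revalidation: empty 304
```

This mirrors the advice above: the win only materializes if computing (or, better, caching) the ETag is cheaper than building and sending the full response.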
15. Client-Side Caching
• Most browsers have a local cache – tell your clients to use it!
• Expires header tells client how long it can reuse a response
• Expires: Thu, 03 Apr 2014 03:19:37 GMT
• Cache-Control: max-age=## (where ## is seconds) header does the same, but applies to more than just the client cache…
• In either case it’s up to the client whether it uses the cache or not
• Most browsers cache aggressively
16. Intermediate Caching
• Cache-Control header specifies who can cache, what they can cache, and how things can be cached
• Public / Private – whether a response can be reused for all requests, or is specific to a certain user
• max-age – the longest a response can be cached in seconds (overrides Expires header)
• must-revalidate – if the response expires, must revalidate it with the server before using it again
• no-cache – must check with the server first before returning a cached response
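The directives above compose into a single header value. A small Python helper makes the combinations concrete (the policies chosen in the examples are illustrative, not prescriptive):

```python
def cache_control(public, max_age=None, must_revalidate=False, no_cache=False):
    """Assemble a Cache-Control header value from the directives above."""
    parts = ["public" if public else "private"]
    if max_age is not None:
        parts.append("max-age=%d" % max_age)  # seconds; overrides Expires
    if must_revalidate:
        parts.append("must-revalidate")
    if no_cache:
        parts.append("no-cache")
    return ", ".join(parts)

# A static stylesheet anyone (including proxies) may cache for an hour:
shared = cache_control(public=True, max_age=3600)
# Per-user data: only this client may cache it, and must revalidate when stale:
personal = cache_control(public=False, max_age=0, must_revalidate=True)
```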
17. Client-Side Caching
• Great for static or relatively static data
• Static HTML, JS, CSS files, or read-only lists of data that rarely change
• Not so great for dynamic or mission-critical data
• Hard to force clients to get latest version of data when they don’t even talk to the server
• If you have to update before Expires or Max-Age runs out, you’ve got a problem
Response time: 4342ms → 100ms (97.7%)
18. Review
• Optimize your database for your application
• Cache on the server
• Common database calls
• Serialized results
• Send less data
• HTTP Compression
• Paging
• Conditional Headers / 304 Not Modified
• Cache on the client
• Expires and Cache-Control headers
- Before animations, show “standard” endpoint code. Note on “bounces around to web server”: that’s about 8 hops from my home network to our Azure instance in East Asia.
Mention the cool new in-memory OLTP tables and compiled stored procedures in SQL Server 2014.
You can return multiple result sets in one query – it takes some manual tweaking of the EDMX file and/or extra code in Entity Framework, but it is possible. Keep in mind that high traffic + async can result in database overload. After the last point, show a call to the cached-db endpoint.
Shared cache like memcached, or possibly a faster database call (e.g. NoSQL, in-memory table, etc.)
Show applicationhost.config transform, make request with Accept-Encoding: gzip header