JMeter performance and scalability in Moodle - Montana Moot 2014 (moorejon)
Using JMeter, Moodle admins can assess the capacity and potential of their Moodle site. With JMeter testing, admins can determine what kind of concurrency they should expect to achieve with their current server configuration. This workshop ties into a one-hour session on performance.
Have you experienced a Moodle site failure during a critical time? Are you worried that your Moodle service won't be able to meet your needs at the busiest times? This session will cover a variety of methods to ensure optimal performance of Moodle under peak load. The session will address general resource guidelines for expected concurrency and help administrators determine the correct sizing of IT resources for an expected Moodle load. The session will also cover benchmarking techniques that can be used to measure the actual performance of your Moodle infrastructure.
This document discusses tools and best practices for measuring and improving Drupal performance. It introduces benchmarking and profiling tools like Apache Bench, Yslow, and XHProf. It then outlines easy optimizations like enabling caching, compressing files, and database configuration. More advanced techniques include opcode caching, reverse proxying, clustering, and offloading search. The presentation aims to provide a broad overview of performance topics and spur discussion.
Web Performance Part 3 "Server-side tips" (Binary Studio)
The presentation is devoted to server-side tips for improving web performance. All 4 presentations will help you reduce latency, improve optimization of JavaScript code, discover tricky parts of working with browser APIs, see best practices for networking, and learn lots of other important and interesting things. Enjoy! =)
This document summarizes the Massive Storage Engine 2.0, which was built to address scaling issues with file- and memory-based backends in handling workloads with gigabytes of content. It features allocation that is fragmentation-proof and can scale to over 100 terabytes, with an LFU eviction approach. The architecture uses threads for reliable allocation across multiple segments with reduced locking. It also supports an optional persistent datastore by mirroring metadata to disk in an asynchronous manner with minimal impact to performance. Evaluation showed it handles larger files well and recovers quickly from crashes by reading the stored book of metadata.
HTTP caching can decrease latency, server response times, and costs by storing previously fetched web content in caches located on browsers and proxies. There are two main types of caches - browser caches that store content on the user's device and proxy caches that store content between clients and origin servers. Caching benefits users through improved performance and availability of content when offline. Cache policies can be set using headers to control whether content is cached, where, and for how long before revalidation is required.
This document discusses caching techniques for improving performance and protecting resources. It outlines goals of caching like performance and resource protection. Key considerations for caching include thundering herd issues and negative caching. The document then describes simple, blocking, and refreshing cache algorithms. It provides a demo of caching techniques using a GitHub repository and discusses the Caffeine caching library.
Charlie Reverte, VP of Engineering at AddThis, discusses lessons learned from processing large-scale web data. AddThis processes data from 14 million domains, including 100 billion monthly page views and 50,000 events per second. Reverte outlines challenges around distributed ID generation, counting unique values, joining distributed data, sampling large datasets, and deploying systems that invalidate over 1.4 billion browser caches. He advocates for loose coupling between systems using approaches like Kafka for asynchronous event logging. Reverte also discusses techniques for columnar compression, tunable quality of service, and open sourcing Hydra, AddThis' custom processing system optimized for real-time data.
As service providers and primary code contributors in the Islandora Community, discoverygarden encounters customers who are ingesting, accessing, and storing high volumes of data. For example, a customer who had 150,000 objects in 2012 now has three million objects and expectations to grow to five million in the very short term. This is increasingly common.
As repositories grow in size, they can encounter poor performance, particularly during large ingests and derivative generation. To accommodate growing repositories, caching mechanisms, infrastructure changes, and code updates are necessary.
The presentation will explore customer case studies that demonstrate interim solutions and the extensive, ongoing research and development to find long-term solutions.
This document discusses Memcached, a distributed caching system that stores data and objects in memory for fast access. It can be used to cache database queries, API responses, and other computationally expensive operations. Memcached is an in-memory key-value store that uses a simple client-server architecture over TCP/IP or UDP. It allows storing and retrieving arbitrary data (strings, objects) indexed by keys.
Performance tests in Gatling outlines how to use the Gatling load testing tool to test the performance of web applications. Gatling allows writing simulations in Scala to generate load and analyze results. It uses an asynchronous actor model and handles a high number of concurrent users without blocking. The document discusses setting goals for performance tests, using Gatling's GUI to record and replay tests, analyzing results like response times and errors, and validating responses with assertions in the Scala simulations. Code examples demonstrate configuring requests, storing data in sessions, handling responses, and using loops and conditions to model different user behaviors in tests.
Concept of flexible open API server with Node.js (주용 오)
The document proposes a flexible OpenAPI server architecture using Node.js to avoid the inflexibility of traditional servers that require rebooting when new logic is deployed. It suggests using JavaScript's ability to not require pre-compilation and Node.js' asynchronous processing to improve on synchronized traditional servers. The conceptual architecture would parse URI paths to link them to logic modules with names consisting of the OpenAPI name and method, like "testapi_GET.js". An index and server file would run examples to test the proposed flexible architecture.
Draft slide of Demystifying DHT in GlusterFS (Ankit Raj)
This document discusses distributed hash tables (DHT) in GlusterFS. It introduces key terminology like bricks, volumes, and nodes. It explains that GlusterFS uses a distributed hash model to store and access files across multiple servers, organizing and displaying files as if they were stored locally. This allows for centralized storage and easier distribution of documents to multiple clients without using the clients' local storage resources. The document then outlines how DHT solves problems and lists some common file operations like mkdir, create, lookup, and read that DHT facilitates. It also addresses managing scalability through operations like expanding volumes, rebalancing, and replacing bricks.
This document introduces CouchDB, an open-source document-oriented NoSQL database that uses a RESTful API. It is schema-less and stores data in JSON format. Documents can be queried using user-defined JavaScript map/reduce functions. CouchDB supports multi-master replication and MVCC concurrency control. Examples are provided on installing CouchDB, creating databases and documents via REST calls, updating documents, and creating views. Major companies that use CouchDB are also listed.
This document provides an overview and comparison of PostgreSQL, MongoDB, and ElasticSearch as active data stores. It defines an active data store as data that can be queried, manipulated, and transformed within a service layer. For each database, it briefly describes its key features at a high level and provides instructions for setting up a Docker playground to experiment with it. The document concludes by emphasizing choosing the right tool for the job and offers help from ARC-TS for decisions.
Step-by-step process to scale up a LAMP stack application, using PHP7, Amazon Elastic Beanstalk and other free services. Covers many traps to be avoided when vertical and horizontal scaling.
This document discusses optimizing PHP and web server performance. It covers using opcode caches like APC to improve PHP performance. It also discusses web performance best practices recommended by Google and Yahoo, including using CDNs, browser caching, minimizing assets, and profiling tools. The document is presented by the CTO of a mobile ad network company that sees high traffic volumes on only two servers.
Every webpage element should be cached to improve performance. Tracking element changes and invalidating outdated caches is challenging due to potential race conditions and simultaneous requests. A caching subsystem is needed to warm caches, track element lifetimes and prioritize frequently used items while keeping some elements like user data reasonably fresh.
Stream or segment: what is the best way to access your events in Pulsar - Neng (StreamNative)
Infinite event streams are the core data abstraction in Apache Pulsar. Pulsar provides two-level reading APIs for accessing events in Pulsar topics, one is pub/sub and the other one is segment readers. The pub/sub API provides a unified messaging API for accessing events in a streaming way. People can choose different subscription modes for consuming events. The segment API provides a way to access events directly from Apache BookKeeper and tiered storage, which is more suitable for batch-oriented workloads. You can combine both pub/sub API and segment API to create a unified data processing experience as well.
In the past year, we at StreamNative have been helping with many customers running Pulsar for different use cases from online queuing, event sourcing to stream and batch processing. We also worked on integrating Pulsar with different components in the big data ecosystem. In this talk, we will share our experiences and best practices of choosing the right API for accessing your event streams in Pulsar for different use cases.
This document discusses speeding up the ZingMe-NTVV2 application by writing a PHP extension module. It introduces NTVV2, which has high traffic volumes. Writing a PHP extension can make complicated business functions run faster and use less memory compared to pure PHP. The document explains what a PHP extension is, its lifecycle, and how to set up the build environment. It recommends using SWIG, an interface compiler, to more easily connect C/C++ programs to PHP. SWIG allows defining types, wrapping classes/functions, and exposing functions to PHP. The document provides steps for using SWIG, including defining the module, generating code, creating a project, and compiling. Caching data in the PHP module
Webinar slides: Become a MongoDB DBA - What to Monitor (if you’re really a My... (Severalnines)
To operate MongoDB efficiently, you need to have insight into database performance. And with that in mind, we’ll dive into monitoring in this second webinar in the ‘Become a MongoDB DBA’ series. MongoDB offers many metrics through various status overviews and commands, but which ones really matter to you? How do you trend and alert on them? What is the meaning behind the metrics?
We’ll discuss the most important ones and describe them in ordinary plain MySQL DBA language. And we’ll have a look at the open source tools available for MongoDB monitoring and trending. Finally, we’ll show you how to leverage ClusterControl’s MongoDB metrics, dashboards, custom alerting and other features to track and optimize the performance of your system.
AGENDA
How does MongoDB monitoring compare to MySQL
Key MongoDB metrics to know about
Trending or alerting?
Available open source MongoDB monitoring tools
How to monitor MongoDB using ClusterControl
Demo
SPEAKER
Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic MySQL and Database expert with over 15 years experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he kept a broad vision upon the whole database environment: from MySQL to Couchbase, Vertica to Hadoop and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.
- Clickhouse is being used by GoEuro to replace their Graphite backend for monitoring, as their previous Graphite setup required too much maintenance and tuning over time to handle their scale of 20 million visitors per month, 150 engineers, and 600+ releases per week.
- Key reasons for choosing Clickhouse include its built-in replication, sharding, linear scalability, and a GraphiteMergeTree table engine that provides 100% compatibility with the Graphite query language.
- Downsides of Clickhouse include initial dependency on Zookeeper for sharding/replication and slower read queries against sharded data, but it currently uses only 2 CPU cores and 2GB RAM to handle GoEuro's monitoring needs.
The document discusses how to build a system that can handle high access requests. It covers optimizing performance at the node level and scaling to multiple nodes. It then discusses various problems that can occur at different levels, from the client to the server to cross-server, and provides solutions for issues like caching, load balancing, and communication between servers. The overall goal is to understand where bottlenecks can occur and how to optimize each component to build a scalable system that can handle high traffic loads.
Caching in Drupal provides faster page loads and saves server resources. It uses cache backends like the database to store cached content in bins like page or block caches. Cache tags and contexts allow invalidating specific cached content when the underlying data changes. The cache API allows developers to specify cacheability metadata to effectively cache parts of pages, while ensuring cached content remains valid.
This document provides an overview of MongoDB, a NoSQL document database, comparing its features to SQL databases. It demonstrates how to setup MongoDB, import and query data, create indexes, and connect to MongoDB from C#. Key features covered include MongoDB's document model with dynamic schemas, indexing, aggregation capabilities, and scaling through replication and sharding.
Prometheus lightning talk (Devops Dublin March 2015) (Brian Brazil)
This document introduces Prometheus, an open-source monitoring system that allows instrumentation of everything including RPCs, interfaces, business logic, and logs. It provides client libraries that make instrumentation easy across many languages. The Prometheus server can handle over a million time series in one instance with no dependencies. It offers dashboards, expression queries, alerts and integrates with many systems. Time series have structured labels allowing flexible aggregation and complex math for rules and alerts. Prometheus costs less than $.001 per time series per month and is developed by SoundCloud, Boxever and Docker with an active community.
Web performance optimization - MercadoLibre (Pablo Moretti)
The document provides techniques and tools for improving web performance. It discusses how reducing response times can directly impact revenues and user experience. It then covers various ways to optimize the frontend, including reducing time to first byte through DNS optimization and caching, using content delivery networks, HTTP compression, keeping connections alive, parallel downloads, and prefetching. It also discusses optimizing images, JavaScript loading, and introducing new formats like WebP. The overall document aims to educate on measuring and enhancing web performance.
Solving Multi-tenancy and G1GC in Apache HBase (HBaseCon)
This document discusses tuning Garbage First Garbage Collector (G1GC) for HBase clusters. Out of the box G1GC can hurt performance with long GC pauses. The key tuning parameters are heap size, initiating heap occupancy percentage, Eden size percentage, and HBase memory configuration caps. Tuning involves setting these parameters based on historical maximums for block cache size, memstore size, and static index size plus a buffer. Tuning Eden size also considers percentage time in GC and average young GC pause times. Adjustments may be needed over time based on cluster usage. Suboptimal client usage could also impact GC and requires fixing. Monitoring GC metrics helps evaluate tuning effectiveness.
The document provides an overview of best practices for Moodle administration based on a presentation. It includes tips on change management, performance tuning, backups, user management, issue tracking, security practices, and custom development. The presentation encourages automating processes, using change control boards, monitoring server performance, and testing custom modules and themes. It also demonstrates hands-on exercises for administrative tasks in Moodle.
This presentation builds on a 6-month student project measuring the performance of a Moodle installation, and suggests what can be done to improve performance without changing the code.
This presentation summarises our testing method and our performance recommendations.
Best practices in Moodle administration - Montana Moot 2014 (moorejon)
Best Practices in Moodle Administration provides tips for optimizing Moodle performance and uptime. The presentation recommends establishing change management processes, using monitoring tools, automating user management, and following security best practices. It also demonstrates hands-on techniques for backups, debugging issues, and customizing themes.
Capacity Planning For Your Growing MongoDB Cluster (MongoDB)
This document discusses capacity planning for deploying MongoDB. It defines capacity planning as planning for requirements like availability, throughput, and responsiveness by determining necessary resources like CPU, memory, storage, and network capacity. It emphasizes starting capacity planning before launch to avoid downtime. Key aspects of capacity planning for MongoDB include estimating working memory set size, storage I/O needs based on data size and access patterns, using tools like IOStat and MongoDB Management Service for monitoring and automation, and conducting iterative testing and deployments. Failure occurs if planned resources cannot meet requirements.
As cloud adoption has grown rapidly in the last decade, DBAs can add more value to systems and bring more scalability to the DB server. This talk was presented at the Open Source India 2018 conference by Kabilesh and Manosh of Mydbops. They share experiences and value additions made for customers during their consulting process.
MongoDB capacity planning involves determining hardware requirements and sizing to meet performance and availability expectations. Key aspects include measuring the working set, monitoring resource usage, and iteratively planning as requirements and data change over time. Resources like CPU, storage, memory and network need to be considered based on the application's throughput, responsiveness and availability needs.
#OSSPARIS19 - How to improve database observability - CHARLES JUDITH, Criteo (Paris Open Source Summit)
The document discusses improving database observability. It recommends collecting metrics like utilization, saturation, errors, latency, traffic, and errors using methods like USE, RED, and the seven golden signals. These metrics can be collected via tools like Collectd and exported to dashboards. Logs of SQL queries and slow queries should also be collected and analyzed. The goal is to increase transparency and visibility of the database to help users and developers and improve monitoring and reliability. Future work includes enhancing SQL logging, using sys_schema, publishing SLAs, and open sourcing monitoring probes.
This document provides best practices for optimizing Blackboard Learn performance. It recommends deploying for performance from the start, optimizing platform components continuously through measurements, using scalable deployments like 64-bit architectures and virtualization, improving page responsiveness through techniques like gzip compression and image optimization, optimizing the web server, Java Virtual Machine, and database through configuration and tools. It emphasizes the importance of understanding resource utilization, wait events, execution plans, and statistics/histograms for database optimization.
The document discusses three important things for IT leaders to know about SQL Server: database performance and speed matter; backups and disaster recovery plans are not all equal; and high availability/disaster recovery (HA/DR) tools provide proactive disaster protection. It provides tips on optimizing database performance through query tuning instead of hardware upgrades. It explains the importance of backing up transaction logs and having comprehensive disaster recovery plans, including solutions like AlwaysOn availability groups. The document promotes the services of SQLWatchmen for database diagnostics, tuning, disaster planning and recovery support.
OSMC 2019 | How to improve database Observability by Charles Judith (NETWAYS)
Delivering a database service is not a simple job, but to ensure that everything is working correctly your platform needs to be observable. In this talk, I’ll cover how we make MySQL/MariaDB databases observable. We’ll talk about the RED and USE methods, and the golden signals. You’ll discover how we dealt with the question “We think the database is slow”. This talk will show you how to make your databases observable with open source solutions.
This workshop is aimed at Moodle admins who have already done some Moodle administration and want to understand the changes that Moodle 2 brings for admins, and also how to help optimise their Moodle site.
Moodle 2 Admin workshop 2 (afternoon session)
This session will focus on performance related aspects of Moodle 2 including:
The hosting application layer (Web server, Database)
The different server options for hosting Moodle
Performance testing
Typical areas which affect performance
Performance tweaking
Deploying any software can be a challenge if you don't understand how resources are used or how to plan for the capacity of your systems. Whether you need to deploy or grow a single MongoDB instance, replica set, or tens of sharded clusters then you probably share the same challenges in trying to size that deployment.
Intro to XPages for Administrators (DanNotes, November 28, 2012) (Per Henrik Lausten)
This document introduces XPages for administrators. It discusses:
- What XPages are and examples of XPages applications
- The administrator's important role in the application lifecycle in helping developers and users
- Tips for maximizing performance such as hardware configuration, server settings, caching, and preloading applications
- Application development best practices including supported Dojo and OneUI versions
- Configuring and administering Domino Directory, Internet sites, and security settings
- Tools for troubleshooting, monitoring, and impressing developers like the Extension Library and demo app
The document discusses techniques for improving web performance, including reducing time to first byte, using content delivery networks and HTTP compression, caching resources, keeping connections alive and reducing request sizes. It also covers optimizing images, loading JavaScript asynchronously to avoid blocking, and prefetching content. The overall goal is to reduce page load times and improve user experience.
How to get the maximum performance from your AEP server. This will discuss ways to improve execution time of short running jobs and how to properly configure the server depending on the expected number of users as well as the average size and duration of individual jobs. Included will be examples of making use of job pooling, Database connection sharing, and parallel subprotocol tuning. Determining when to make use of cluster, grid, or load balanced configurations along with memory and CPU sizing guidelines will also be discussed.
Silverstripe at scale - design & architecture for silverstripe applications (Brett Tasker)
Brett Tasker discusses architecture and performance considerations for scaling Silverstripe applications. Some key points include:
- Silverstripe applications can be scaled through load balancing and microservices architectures. PHP executions are single-threaded so adding more servers allows utilizing multiple CPU cores.
- Web servers like Apache and Nginx support both mod_php and FastCGI/FPM PHP handlers, with the latter being more performant due to running PHP in separate processes.
- Caching with APCu, OPcache, Redis and Memcached can improve performance but each have different characteristics regarding memory usage, garbage collection, and data expiration.
- Database, templates, and ORM queries also impact performance so optimizing
This document discusses using the resource manager to control parallelism and Auto DOP in the database. It introduces parallelism concepts, Auto DOP, and how the resource manager can be used to limit parallelism through consumer groups, directives, and queuing. Setting up the resource manager divides users into groups, assigns parallel limits, and prevents performance degradation by queuing queries rather than allowing parallelism to downgrade. The resource manager provides finer control over Auto DOP and makes parallelism usage and throughput more predictable.
Similar to Moodle performance testing presentation - Jonathon Moore
Designing Active Learning in Moodle – a preview of the Learning Designer tools (Eileen Kennedy, D. N. Dimakopoulos, Diana Laurillard)
Presented at Moodlemoot Edinburgh 2014
www.moodlemoot.ie
The document describes enhancements made to the Moodle homepage interface to make it more course-focused for students. A new block was added to centralize key course information like the course description, recent forum posts from all modules, and tabs with modules, assignments, and tutor details. The goal is to emphasize the student's overall course rather than just a collection of individual modules. Other blocks on the homepage were chosen to complement this course-focused approach and target information to students, staff or faculty.
Broadening the scope of a Maths module for student Technology teachers (Sue Milne, Sarah Honeychurch, Niall Barr)
Presented at Moodlemoot Edinburgh 2014
www.moodlemoot.ie
A proposal for integrating Serious Games made with Unity3D into Moodle courses (Frank Poschner, Dieter Wloka)
Presented at Moodlemoot Edinburgh 2014
www.moodlemoot.ie
This document describes the assessment elements used in a Principles of Economics module, including weekly quizzes, two online tests, a case study, and tutorial participation. The quizzes contribute to the final grade if completed within a week of the material being presented, and also give students access to lecture notes and tutorial answers. The tests include multiple choice and true/false questions covering all chapters. The author has published papers arguing that this continuous assessment scheme using an online gradebook can help induce regular revisions in students' learning process.
Using the Moodle Quiz for Formative and Summative Assessment: Safe Exam Browser and Laptops for Assessments Projects (Mike Wilson)
Presented at Moodlemoot Edinburgh 2014
www.moodlemoot.ie
The document discusses proposed changes to the Moodle quiz editing page, including breaking questions into sections, replacing buttons with an add menu, allowing question dependencies, and adding drag and drop and flexible repagination functionality. Quiz authors could view more questions per page, drag and drop questions within and across sections, add dependencies, and flexibly repaginate. Students would benefit from questions organized into sections on the navigation block and quiz summary page, and could be prompted about dependencies and repeat questions in adaptive quizzes.
Many a Mickle Makes a Muckle: A multitude of Moodle mods to enhance the student learning experience (Roger Emery, Daran Price)
Presented at Moodlemoot Edinburgh 2014 www.moodlemoot.ie
The document discusses extending the capabilities of Moodle Books by adding active learning elements like questions and assessments. It proposes developing a Moodle Workbook module that would integrate question bank functionality to allow questions to be added directly within book chapters. This would provide a structured way for students to self-test their comprehension through questions embedded in the learning context. Teachers would be able to import, edit, review, grade and provide feedback on student question responses through a linked quiz available only to teachers. The document considers both developing a standalone Workbook plugin versus modifying Books to link to quizzes.
2. Analysing Usage
● We size for the highest peak
● Describe the peak scenario (it can be captured as a handful of numbers, as in the sketch below)
● How many users?
● How short an interval?
● What are they doing?
● How much growth is expected?
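One way to pin these questions down is to write the peak scenario out as explicit numbers before any testing starts. The sketch below is purely illustrative: every figure in it (population, concurrency percentage, peak window, activity mix, growth rate) is an assumption you would replace with answers from your own site.

```python
# Hypothetical peak scenario for a Moodle site: all numbers are placeholders.
peak_scenario = {
    "total_users": 8000,          # enrolled population (assumption)
    "peak_concurrency_pct": 0.08, # fraction of users active in the peak window
    "peak_window_minutes": 30,    # e.g. an exam start or assignment deadline
    "activity_mix": {             # what those users are doing during the peak
        "quiz_attempt": 0.50,
        "view_course": 0.30,
        "forum_post": 0.10,
        "file_download": 0.10,
    },
    "annual_growth_pct": 0.20,    # expected growth before the next peak
}

concurrent_now = peak_scenario["total_users"] * peak_scenario["peak_concurrency_pct"]
concurrent_next_year = concurrent_now * (1 + peak_scenario["annual_growth_pct"])
print(f"Size for ~{concurrent_now:.0f} concurrent users today, "
      f"~{concurrent_next_year:.0f} after expected growth.")
```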
3. The mdl_log
● A wealth of information
● Many sites have > a year of data
● Determine concurrency (see the sketch below)
● Determine % of activity types
● Visualize historical usage
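As a concrete example of mining the legacy log, the sketch below assumes you have exported the `time`, `userid`, and `module` columns of mdl_log to a CSV file; the file name and the 5-minute window size are arbitrary choices, not anything prescribed by the presentation. It estimates peak concurrency (distinct active users per window) and the activity-type mix.

```python
# Rough concurrency / activity-mix estimate from a legacy mdl_log export.
# Assumed export (illustrative):
#   SELECT time, userid, module FROM mdl_log   ->  mdl_log_export.csv
# with a header row "time,userid,module"; time is a unix timestamp.
import csv
from collections import defaultdict, Counter
from datetime import datetime

BUCKET_SECONDS = 300  # 5-minute windows; use whatever peak interval you care about

users_per_bucket = defaultdict(set)
activity = Counter()

with open("mdl_log_export.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        bucket = int(row["time"]) // BUCKET_SECONDS
        users_per_bucket[bucket].add(row["userid"])
        activity[row["module"]] += 1

peak_bucket, peak_users = max(
    ((b, len(u)) for b, u in users_per_bucket.items()), key=lambda x: x[1]
)
peak_start = datetime.fromtimestamp(peak_bucket * BUCKET_SECONDS)
print(f"Peak: {peak_users} distinct active users in the 5 minutes from {peak_start}")

total = sum(activity.values())
for module, count in activity.most_common(10):
    print(f"{module:<15} {100 * count / total:5.1f}%")
```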
4. Optimizing Performance
● Use a PHP accelerator
● Balance your memory budget
● Make the InnoDB buffer pool the same size as the DB (see mysqltuner.pl; a quick check is sketched below)
● Most sensitive to slow disks
  – Sessions
  – Moodle source
  – Moodle database
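mysqltuner.pl is the usual tool for this check. Purely as an illustration, the sketch below makes the same comparison directly: it uses the third-party pymysql package, and the host, user, password, and database name are placeholders, not values from the presentation.

```python
# Compare total InnoDB data+index size with the configured buffer pool.
# Assumes: pip install pymysql, and placeholder credentials below.
import pymysql

conn = pymysql.connect(host="localhost", user="moodle",
                       password="secret", database="moodle")  # placeholders
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT COALESCE(SUM(data_length + index_length), 0) "
            "FROM information_schema.tables WHERE table_schema = %s",
            ("moodle",),
        )
        data_bytes = int(cur.fetchone()[0])

        cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
        pool_bytes = int(cur.fetchone()[1])
finally:
    conn.close()

print(f"Data + indexes: {data_bytes / 2**20:.0f} MiB")
print(f"Buffer pool:    {pool_bytes / 2**20:.0f} MiB")
if pool_bytes < data_bytes:
    print("Buffer pool is smaller than the data set; consider raising it "
          "(within your overall memory budget).")
```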
5. Some General Sizing Guidelines
● ~25-30 MB per Apache client w/ accelerator
● ~200-250 logged-in users per core
● ~4-6x more resources for a user's first minute
● ~10 logged-in users supportable per Apache child
● DB generally 1/10 the size of the Moodledata folder
● 5-10% of the population is a common peak for concurrency (a worked example follows)
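Taken together, these rules of thumb support a quick back-of-the-envelope sizing calculation. In the sketch below, the per-client, per-core, per-child, and concurrency figures come from the guidelines above, while the population and the headroom factor are assumptions you would replace with your own numbers.

```python
# Back-of-the-envelope Moodle sizing from the rule-of-thumb figures above.
population = 8000            # assumption: total site users
peak_concurrency_pct = 0.08  # guideline: 5-10% of population at peak
mb_per_apache_client = 30    # guideline: ~25-30 MB per client with accelerator
users_per_core = 200         # guideline: ~200-250 logged-in users per core
users_per_apache_child = 10  # guideline: ~10 logged-in users per child
headroom = 1.5               # assumption: margin for first-minute spikes

concurrent = population * peak_concurrency_pct
apache_children = concurrent / users_per_apache_child * headroom
cores = concurrent / users_per_core * headroom
apache_ram_mb = apache_children * mb_per_apache_client

print(f"Peak concurrent users: ~{concurrent:.0f}")
print(f"Apache children:       ~{apache_children:.0f}")
print(f"CPU cores:             ~{cores:.1f}")
print(f"RAM for Apache alone:  ~{apache_ram_mb / 1024:.1f} GB "
      "(plus DB buffer pool, OS cache, etc.)")
```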
6. JMeter Benchmarking
● Simulates simultaneous user activity
● Gathers response-time and throughput results (the sketch below pulls these out of a results file)
● Not a browser, so it can't execute JavaScript
● Can be used to measure the +/- effect of changes
● Can be used to estimate expected concurrency, but that is harder
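JMeter writes its raw samples to a .jtl results file; saved in CSV mode with headers, the default columns include timeStamp, elapsed, label, and success. The sketch below is a minimal summary of such a file (the file name is a placeholder), reporting throughput, average and 95th-percentile response times, and error rate.

```python
# Summarise a JMeter .jtl results file saved as CSV with a header row.
import csv

elapsed, errors = [], 0
first_ts, last_ts = None, None

with open("results.jtl", newline="") as fh:      # placeholder file name
    for row in csv.DictReader(fh):
        ts = int(row["timeStamp"])               # ms since epoch
        elapsed.append(int(row["elapsed"]))      # response time in ms
        errors += row["success"].lower() != "true"
        first_ts = ts if first_ts is None else min(first_ts, ts)
        last_ts = ts if last_ts is None else max(last_ts, ts)

elapsed.sort()
samples = len(elapsed)
duration_s = max((last_ts - first_ts) / 1000.0, 1.0)
p95 = elapsed[int(0.95 * (samples - 1))]

print(f"Samples:     {samples}")
print(f"Throughput:  {samples / duration_s:.1f} req/s")
print(f"Average:     {sum(elapsed) / samples:.0f} ms")
print(f"95th pct:    {p95} ms")
print(f"Error rate:  {100 * errors / samples:.2f}%")
```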
7. What to Use for Testing?
● Copy of production for test site
  – (+) Harder for server to cache
  – (-) More likely errors in results due to broken content
  – (-) Have to reset user passwords (see the sketch below)
● Synthetic test site
  – (+) Can use known-good content, so fewer false errors
  – (-) Takes a lot of time and effort to prepare
  – (-) Smaller DB = easier server caching
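One way to handle the password-reset drawback on a copied production database is to overwrite every non-admin account's password hash (and, ideally, email address) in the copy before testing. The sketch below is only an illustration and makes several assumptions: it uses the third-party pymysql package with placeholder credentials, it assumes the standard mdl_ table prefix, and KNOWN_HASH must be a bcrypt hash you have generated yourself (for example with PHP's password_hash()) for the password your JMeter users will log in with.

```python
# Reset passwords and scramble emails for test accounts on a COPIED database.
# Never run this against production. Assumes: pip install pymysql.
import pymysql

# Placeholder: a bcrypt hash of the shared test password, generated elsewhere.
KNOWN_HASH = "$2y$10$................................................."

conn = pymysql.connect(host="localhost", user="moodle",
                       password="secret", database="moodle_test")  # placeholders
try:
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE mdl_user "
            "SET password = %s, "
            "    email = CONCAT('user', id, '@example.invalid') "
            "WHERE deleted = 0 AND username <> 'admin'",
            (KNOWN_HASH,),
        )
        print(f"Updated {cur.rowcount} accounts")
    conn.commit()
finally:
    conn.close()
```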
8. Testing Changes
– A single test user and course may work well
– The more accurate the test rig, the less likely it is to miss a problem
– Run the same test between changes
– If results degrade, don't move forward with the change without careful review (a simple before/after comparison is sketched below)
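A lightweight way to enforce that rule is to compare the summary numbers from the "before" and "after" runs against a tolerance. Everything in the sketch below is illustrative: the summary values are made-up examples, and the 10% tolerance is an arbitrary threshold to tune for your own site.

```python
# Compare two benchmark summaries (e.g. produced by the .jtl parser above).
TOLERANCE = 0.10  # flag anything more than 10% worse (arbitrary threshold)

baseline = {"avg_ms": 310, "p95_ms": 720, "error_pct": 0.4}   # made-up "before" run
candidate = {"avg_ms": 355, "p95_ms": 910, "error_pct": 0.5}  # made-up "after" run

regressions = []
for metric, before in baseline.items():
    after = candidate[metric]
    if before > 0 and (after - before) / before > TOLERANCE:
        regressions.append(f"{metric}: {before} -> {after}")

if regressions:
    print("Do not roll the change forward without review:")
    for line in regressions:
        print("  " + line)
else:
    print("No regression beyond tolerance; change looks safe to keep.")
```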
9. Measuring Concurrent Capacity
● Test users, enrolment, and course population should be on par with the production site
● The test set mix should reflect the activity % of the production site
● Calculate (see the sketch below)
  – Simultaneous logged-in user count
  – Simultaneously logging-in user count
  – They are different
  – Know what you need for each
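Both numbers can be estimated from the same mdl_log export used earlier. The sketch below treats any user with a log entry in a 5-minute window as "logged in", and counts rows recorded with module 'user' and action 'login' as "logging in". Both the window size and the assumption that logins appear that way in your log are choices to verify against your own data.

```python
# Distinguish "logged in" (active) users from "logging in" users per window.
# Assumed export (illustrative):
#   SELECT time, userid, module, action FROM mdl_log  ->  mdl_log_export.csv
import csv
from collections import defaultdict

BUCKET_SECONDS = 300  # 5-minute windows

active_users = defaultdict(set)   # users with any activity in the window
login_events = defaultdict(int)   # login actions in the window

with open("mdl_log_export.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        bucket = int(row["time"]) // BUCKET_SECONDS
        active_users[bucket].add(row["userid"])
        if row["module"] == "user" and row["action"] == "login":
            login_events[bucket] += 1

peak_logged_in = max(len(u) for u in active_users.values())
peak_logging_in = max(login_events.values()) if login_events else 0

print(f"Peak simultaneously logged-in users (per 5 min): {peak_logged_in}")
print(f"Peak login events in a 5-minute window:          {peak_logging_in}")
```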
10. Moodle 2.6 and JMeter
● New integration available
● Creates test plan with many activity types
● Provides comparison reports
● Only single user / single course test?