The document discusses performance testing and introduces the Gatling load testing tool. It provides an overview of why performance testing is difficult due to the need to simulate production environments. It then discusses Gatling's domain specific language (DSL) for defining load tests and scenarios, including features for HTTP requests, checks, looping, conditions, error handling, setup, feeders and reporting. Gatling allows defining and executing distributed load tests across multiple machines.
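Gatling itself expresses load scenarios in a Scala DSL, but the core idea a load generator implements (concurrent virtual users issuing requests and collecting latency statistics) can be sketched language-agnostically. The toy closed-model sketch below is in Python for illustration only; the function names and parameters are invented, and this is in no way a substitute for Gatling's scenario DSL, checks, or reporting.

```python
import concurrent.futures
import statistics
import time

def load_test(action, users=10, requests_per_user=5):
    """Run `action` from `users` concurrent virtual users and collect latencies."""
    def virtual_user(_):
        samples = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            action()  # in a real test this would be an HTTP request
            samples.append(time.perf_counter() - start)
        return samples

    latencies = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        for samples in pool.map(virtual_user, range(users)):
            latencies.extend(samples)

    return {
        "count": len(latencies),
        "mean_ms": statistics.mean(latencies) * 1000,
        # 95th percentile: last of the 19 cut points dividing the data into 20 groups
        "p95_ms": statistics.quantiles(latencies, n=20)[-1] * 1000,
    }

stats = load_test(lambda: time.sleep(0.001))
print(stats["count"])  # 10 users x 5 requests = 50 samples
```

Real tools add what this sketch lacks: ramp-up profiles, response checks, feeders for test data, and aggregated reports; Gatling additionally avoids the thread-per-user model shown here in favor of asynchronous, non-blocking I/O.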
DSLing your System For Scalability Testing Using Gatling - Dublin Scala User ... - Aman Kohli
The power of Gatling is the DSL it provides for writing meaningful and expressive tests. The talk gives an overview of the framework, describes the team's development environment and goals, and presents their test results.
Source code available https://github.com/lawlessc/random-response-time
This document provides an overview of Cassandra, including:
- Why Cassandra is used for big data applications handling large volumes of data.
- How Cassandra's distributed architecture provides high availability and horizontal scalability.
- Details of Cassandra's write path, including how writes are replicated across nodes and how consistency is ensured.
- Examples of modeling data in Cassandra, including choices for primary keys, clustering columns, and other techniques.
- Common use cases where Cassandra is applicable, such as sensor data, fraud detection, and personalization engines.
Paolo Alvarado Customer Support Engineer, Fastly at Altitude 2016
Customer Support Engineer Paolo Alvarado discusses various useful features of advanced Varnish Configuration Language (VCL).
Design & Performance - Steve Souders at Fastly Altitude 2015 - Fastly
Fastly Altitude - June 25, 2015. Chief SpeedCurver Steve Souders explains how design and web performance are more interconnected than ever before. Users want a fast website with a rich design, but sometimes the interplay between design and performance feels like a fixed sum game: one side's gain is the other side's loss. Design and performance are indeed connected, but it's more like the yin and yang. They aren't opposing forces, but instead complement each other. Bringing these processes together produces experiences that are rich and fast.
Video from the talk: http://fastly.us/Altitude2015_Design-Performance
Steve's bio: Steve Souders is a co-founder at SpeedCurve, where he develops web performance services. His book, High Performance Web Sites, explains his best practices for performance; it was #1 in Amazon's Computer and Internet bestsellers. His follow-up book, Even Faster Web Sites, provides performance tips for today's Web 2.0 applications. Steve is the creator of many performance tools and services including YSlow, the HTTP Archive, Cuzillion, Jdrop, SpriteMe, ControlJS, and Browserscope. He serves as co-chair of Velocity, the web performance and operations conference from O'Reilly, and is co-founder of the Firebug Working Group.
The document provides information on using Ansible to manage network device configurations, including on Juniper devices. It discusses using modules like junos_get_config to back up configurations, templates to generate configurations, and junos_install_config to deploy them. It also covers using Ansible to manage users on Linux systems.
gRPC is a high performance, open-source universal RPC framework. A service definition can be created simply in Protocol Buffers, and libraries in a wide variety of languages then automatically generate the interface objects based on that definition. gRPC is also much faster than REST because it leverages HTTP/2 and Protocol Buffers. This talk will first give an introduction to gRPC. Afterward, we will see gRPC in action by creating a Scala service and a Swift client in a live-coding session.
Building Distributed System with Celery on Docker Swarm - Wei Lin
This document discusses building distributed systems with Celery on Docker Swarm. It introduces Celery for asynchronous task queueing and message passing. Docker Swarm is used to deploy Celery worker containers across multiple hosts for parallel computing. Tasks can be routed to specific workers by queue or host name. This allows building distributed systems easily by sending tasks to worker containers without worrying about the underlying infrastructure.
Apache MXNet Distributed Training Explained In Depth by Viacheslav Kovalevsky... - Big Data Spain
Distributed training is a complex process that does more harm than good if it is not set up correctly.
https://www.bigdataspain.org/2017/talk/apache-mxnet-distributed-training-explained-in-depth
Big Data Spain 2017
November 16th - 17th Kinépolis Madrid
VCL template abstraction model and automated deployments to Fastly - Fastly
Neeraj Mendiratta Sr. Director of Devops, A+E Networks at Fastly Altitude 2016
Hosting hundreds of websites and backend services for multiple environments at the Content Delivery Network level presented a challenge for us at A+E. We solved this problem by applying the DevOps concept of “Infrastructure as Code”. First, a VCL templating framework was created to support a multitude of services and environment agnostic configurations. Then we integrated our CI tool with GitHub and Fastly to make a scalable way of managing our many services. This walkthrough is based on our real-world experiences. We discuss: using the template framework; how to handle the workflow between development, QA, and production environments; and the API calls and integrations necessary for automating deployments to Fastly.
To Hire, or to train, that is the question (Percona Live 2014) - Geoffrey Anderson
"We're hiring!"
How many times have you heard this phrase at a conference? Every database-driven company is hiring and that makes for pretty stiff competition when trying to get a new DBA. Instead of searching for the perfect database administrator from a conference or Linkedin, why not look internally at your organization for system administrators or engineers who may be an equally good fit given the right training.
In this talk, I'll explain how the DBAs at Box developed a knowledge-sharing culture around databases and disseminated important learnings to other members of the company. I'll also cover the mentorship process we established to train other members of our Operations team to become rock star DBAs and manage our MySQL and HBase infrastructure at Box.
"Swoole: double troubles in c", Alexandr Vronskiy - Fwdays
Practices from using the Swoole ecosystem and migrating a real production marketplace app to an asynchronous approach: which benefits we gained and what problems arose on a stack with PHP 8, PostgreSQL, Redis, RabbitMQ, Doctrine, coroutines/fibers, and a concurrent HTTP server.
This document summarizes an introduction to profiling presentation. It discusses using the cProfile module to generate profile data and analyze it using tools like pstats. It also discusses using the results to identify bottlenecks by looking at exclusive time functions or walking down the call graph from inclusive time functions. Common optimizations mentioned include removing unnecessary work, using more efficient algorithms, batching I/O operations, database and SQL tuning, caching, and reducing code complexity.
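The cProfile-then-pstats workflow that summary describes is short enough to show end to end. In this minimal sketch, `busy_work` is a made-up workload; the profiling calls themselves are standard library API.

```python
import cProfile
import io
import pstats

def busy_work(n):
    # a made-up CPU-bound workload to give the profiler something to measure
    total = 0
    for i in range(n):
        total += i * i
    return total

# collect profile data around the code of interest
profiler = cProfile.Profile()
profiler.enable()
busy_work(200_000)
profiler.disable()

# analyze it with pstats, sorted by inclusive (cumulative) time, top 5 entries
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print("busy_work" in report)
```

Sorting by `"cumulative"` walks down from inclusive-time functions as the summary suggests; sorting by `"tottime"` instead surfaces exclusive-time hot spots directly.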
Tips on how to improve the performance of your custom modules for high volume... - Odoo
The document discusses performance optimization for OpenERP deployments handling high volumes of transactions and data. It provides recommendations around hardware sizing, PostgreSQL and OpenERP architecture, monitoring tools, and analyzing PostgreSQL logs and statistics. Key recommendations include proper sizing based on load testing, optimizing PostgreSQL configuration and storage, monitoring response times and locks, and analyzing logs to identify performance bottlenecks like long-running queries or full table scans.
Elasticsearch (R)Evolution — You Know, for Search… by Philipp Krenn at Big Da... - Big Data Spain
Elasticsearch is a distributed, RESTful search and analytics engine built on top of Apache Lucene. After the initial release in 2010 it has become the most widely used full-text search engine, but it is not stopping there. The revolution happened and now it is time for evolution. We dive into current improvements and new features — how to make a great product even better.
https://www.bigdataspain.org/2017/talk/elasticsearch-revolution-you-know-for-search
Big Data Spain 2017
16th - 17th November Kinépolis Madrid
This document discusses key metrics to monitor for Node.js applications, including event loop latency, garbage collection cycles and time, process memory usage, HTTP request and error rates, and correlating metrics across worker processes. It provides examples of metric thresholds and issues that could be detected, such as high garbage collection times indicating a problem or an event loop blocking issue leading to high latency.
This document discusses potential updates to the Web Server Gateway Interface (WSGI) specification and some of the challenges involved. It notes that WSGI 1.0 has limitations for asynchronous systems and middleware. The author explored ideas for WSGI 2.0 like making requests and responses objects and adding context managers for resource management, but concluded it may be too late since so much code relies on the current specification.
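For context, the WSGI 1.0 contract that so much code relies on is small: an application is a callable taking `environ` and `start_response` and returning an iterable of bytes. This minimal sketch exercises such an app in-process using only the standard library; the app body and path are invented for illustration.

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # minimal WSGI 1.0 application: a callable taking environ + start_response
    body = b"hello, " + environ["PATH_INFO"].encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# exercise the app in-process, without a real server
environ = {"PATH_INFO": "/wsgi"}
setup_testing_defaults(environ)  # fills in the remaining required keys
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(app(environ, start_response))
print(body.decode(), captured["status"])
```

The awkwardness the author wrestles with is visible even here: `start_response` is a callback, `environ` is a bare dict, and response data flows back through an iterable, none of which maps cleanly onto asynchronous execution or object-oriented middleware.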
This document provides an introduction to Kibana4 and how to use its features. It discusses the major components of Kibana4 including Discover, Visualize, and Dashboard. It also covers visualization types like metrics, buckets, and aggregations. The document provides examples of using aggregations versus facets and describes settings, scripted fields, and plugins. It concludes by discussing potential future directions for Kibana.
Celery is a distributed task queue that allows long-running processes to be executed asynchronously outside of the main request-response cycle. It uses message brokers like RabbitMQ to distribute jobs to worker nodes for processing. This improves request performance and allows tasks to be distributed across multiple machines. Common use cases include asynchronous tasks like email sending, long database operations, image/video processing, and external API calls.
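The broker/worker pattern described above can be sketched with only the standard library. This is a toy stand-in for illustration, not Celery's actual API: real Celery tasks are declared with decorators, serialized into messages, and dispatched through a broker such as RabbitMQ to workers on other machines.

```python
import queue
import threading

# an in-process stand-in for the broker; Celery would use RabbitMQ or Redis
task_queue = queue.Queue()
results = {}

def worker():
    # a worker node: pull tasks off the queue and record their results
    while True:
        task_id, func, args = task_queue.get()
        try:
            results[task_id] = func(*args)
        finally:
            task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def delay(task_id, func, *args):
    # analogous to task.delay(...) in Celery: enqueue and return immediately,
    # so the caller (e.g. a web request handler) is never blocked
    task_queue.put((task_id, func, args))

delay("t1", lambda x: x * 2, 21)
delay("t2", sum, [1, 2, 3])
task_queue.join()  # wait for completion, like collecting AsyncResult values
print(results["t1"], results["t2"])  # 42 6
```

The point of the pattern is the `delay` call returning at once: the expensive work (email sending, image processing, external API calls) happens later, on whichever worker picks the message up.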
This document discusses Node.js architecture and how software lives in hardware. It notes that Node.js uses a single-threaded, event loop model to avoid context switching and blocking I/O. This allows high throughput for operations like HTTP requests but is not optimal for long-running computations in a single thread. The document also addresses issues like callback hell and scaling event emitters, providing solutions like using promises and external queue systems. It concludes by stating Node.js is best for I/O operations, not all problems, and event loop models have existed in other frameworks before Node.js.
Raymond Kuiper - Working the API like a Unix Pro - Zabbix
Communicating with the Zabbix API can be quite cumbersome, especially if you don't have a background as a programmer. For a sysadmin, it would be very nice if one could just run some CLI commands to control Zabbix behavior.
Wouldn't it be wonderful if you could fetch a list of active triggers and parse it with grep or sed to find the specific triggers you are looking for? Or perhaps you need a list of historic values that you can parse in a custom script? How about a cronjob that downloads and emails all the graphs in the system matching a certain regex?
In this presentation Raymond Kuiper will talk about some of these possibilities and show you how he achieved these things in his Zabbix setup.
Zabbix Conference 2015
This document discusses optimizing JavaScript performance in Node.js. It covers benchmarking Node applications, tips for writing efficient JavaScript that avoids hidden classes and dictionary mode in the V8 engine, profiling Node to find hot spots, and how the V8 optimizing compiler works. The presenter emphasizes the importance of speed and provides resources for further optimizing Node applications.
Introduction to performance tuning perl web applications - Perrin Harkins
This document provides an introduction to performance tuning Perl web applications. It discusses identifying performance bottlenecks, benchmarking tools like ab and httperf to measure performance, profiling tools like Devel::NYTProf to find where time is spent, common causes of slowness like inefficient database queries and lack of caching, and approaches for improvement like query optimization, caching, and infrastructure changes. The key messages are that performance issues are best identified through measurement and profiling, database queries are often the main culprit, and caching can help but adds complexity.
I will show how to use Go's database/sql package, with MySQL as an example. Although the documentation is good, it's dense. I'll discuss idiomatic database/sql code, and cover some topics that can save you time and frustration, and perhaps even prevent serious mistakes.
"Roles and Profiles" is now the ubiquitous design pattern for structuring your Puppet code tree. In this talk we will discuss writing reusable and maintainable profiles. We'll start by introducing module structures and will move on to type hinting and setting appropriate defaults. Finally, we'll discuss the importance of enforcing code style conventions that allow multiple teams or projects to inner-source profiles.
This document discusses techniques for building scalable websites with Perl, including:
1) Caching at various levels (page, partial page, and database caching) to improve performance and reduce load on application servers.
2) Using job queuing and worker processes to distribute processing-intensive tasks asynchronously instead of blocking web requests.
3) Leveraging caching and queueing libraries like Cache::FastMmap, Memcached, and Spread::Queue to implement caching and job queueing in Perl applications.
"As an asynchronous event driven JavaScript runtime, Node is designed to build scalable network applications" — this is how Node.js presents itself: a technology platform that, thanks to its immediacy and productivity, first won over startups and small companies, then carved out a significant space in organizations such as IBM, LinkedIn, Netflix, and Yahoo. Microsoft itself has recognized the platform's potential, integrating Node.js into Visual Studio Code and the latest releases of Visual Studio, and basing some of its Azure services, such as "Mobile Services" and "Functions", on it.
In this session we will see how to implement some common web development scenarios with Node.js, analyzing when its adoption can bring advantages to our daily work. To conclude, we will give a brief architectural overview, describing some scenarios where .NET and Node.js cooperate within the same system.
Code and demos: https://github.com/rucka/CommunityDays2016
Things like Infrastructure as Code, Service Discovery, and Config Management can and have helped us quickly build and rebuild infrastructure, but we haven't spent nearly enough time training ourselves to review, monitor, and respond to outages. Does our platform degrade gracefully, and what does a high CPU load really mean? What can we learn from level 1 outages so we can run our platforms more reliably?
We all love infrastructure as code; we automate everything™. However, making sure all of our infrastructure assets are monitored effectively can be a slow and resource-intensive multi-stage process. During this talk we will investigate how to set up a Nomad cluster that can automatically scale our infrastructure both horizontally and vertically to cope with increased demand from users.
This talk will focus on configuring Nomad and its new autoscaler component to make data-driven decisions about scaling Nomad jobs in or out to fit current customer usage.
Gatling is a load testing tool for analyzing and measuring the performance of a variety of services, with a focus on web applications. It is a Scala-based, high-performance load and stress testing tool.
Blast your app with Gatling! by Stephane Landelle - ZeroTurnaround
This document discusses load testing tools and introduces Gatling as an easy to use and high performance load testing tool. It notes issues with other tools like JMeter having poor performance with many threads and blocking I/O. Gatling uses an asynchronous actor model with non-blocking I/O to reach new performance limits. It has a Scala DSL for defining tests and supports features like CSV and JDBC feeders, error handling, and reporting plugins. A demo of Gatling is provided.
Apache MXNet Distributed Training Explained In Depth by Viacheslav Kovalevsky...Big Data Spain
Distributed training is a complex process that does more harm than good if it not setup correctly.
https://www.bigdataspain.org/2017/talk/apache-mxnet-distributed-training-explained-in-depth
Big Data Spain 2017
November 16th - 17th Kinépolis Madrid
VCL template abstraction model and automated deployments to FastlyFastly
Neeraj Mendiratta Sr. Director of Devops, A+E Networks at Fastly Altitude 2016
Hosting hundreds of websites and backend services for multiple environments at the Content Delivery Network level presented a challenge for us at A+E. We solved this problem by applying the DevOps concept of “Infrastructure as Code”. First, a VCL templating framework was created to support a multitude of services and environment agnostic configurations. Then we integrated our CI tool with GitHub and Fastly to make a scalable way of managing our many services. This walkthrough is based on our real-world experiences. We discuss: using the template framework; how to handle the workflow between development, QA, and production environments; and the API calls and integrations necessary for automating deployments to Fastly.
To Hire, or to train, that is the question (Percona Live 2014)Geoffrey Anderson
"We're hiring!"
How many times have you heard this phrase at a conference? Every database-driven company is hiring and that makes for pretty stiff competition when trying to get a new DBA. Instead of searching for the perfect database administrator from a conference or Linkedin, why not look internally at your organization for system administrators or engineers who may be an equally good fit given the right training.
In this talk, I'll explain how the DBAs at Box developed a knowledge-sharing culture around databases and disseminated important learnings to other members of the company. I'll also cover the mentorship process we established to train other members of our Operations team to become rock star DBAs and manage our MySQL and HBase infrastructure at Box.
"Swoole: double troubles in c", Alexandr VronskiyFwdays
Practices in using Swoole ecosystem & migration real production marketplace app to async approach. Which benefits we got and what problems happens on stack with PHP8, Postgresql, Redis, RebbitMQ, Doctrine, coroutines/fibers, concurrency HTTP Server.
This document summarizes an introduction to profiling presentation. It discusses using the cProfile module to generate profile data and analyze it using tools like pstats. It also discusses using the results to identify bottlenecks by looking at exclusive time functions or walking down the call graph from inclusive time functions. Common optimizations mentioned include removing unnecessary work, using more efficient algorithms, batching I/O operations, database and SQL tuning, caching, and reducing code complexity.
Tips on how to improve the performance of your custom modules for high volume...Odoo
The document discusses performance optimization for OpenERP deployments handling high volumes of transactions and data. It provides recommendations around hardware sizing, PostgreSQL and OpenERP architecture, monitoring tools, and analyzing PostgreSQL logs and statistics. Key recommendations include proper sizing based on load testing, optimizing PostgreSQL configuration and storage, monitoring response times and locks, and analyzing logs to identify performance bottlenecks like long-running queries or full table scans.
Elasticsearch (R)Evolution — You Know, for Search… by Philipp Krenn at Big Da...Big Data Spain
Elasticsearch is a distributed, RESTful search and analytics engine built on top of Apache Lucene. After the initial release in 2010 it has become the most widely used full-text search engine, but it is not stopping there. The revolution happened and now it is time for evolution. We dive into current improvements and new features — how to make a great product even better.
https://www.bigdataspain.org/2017/talk/elasticsearch-revolution-you-know-for-search
Big Data Spain 2017
16th - 17th November Kinépolis Madrid
This document discusses key metrics to monitor for Node.js applications, including event loop latency, garbage collection cycles and time, process memory usage, HTTP request and error rates, and correlating metrics across worker processes. It provides examples of metric thresholds and issues that could be detected, such as high garbage collection times indicating a problem or an event loop blocking issue leading to high latency.
This document discusses potential updates to the Web Server Gateway Interface (WSGI) specification and some of the challenges involved. It notes that WSGI 1.0 has limitations for asynchronous systems and middleware. The author explored ideas for WSGI 2.0 like making requests and responses objects and adding context managers for resource management, but concluded it may be too late since so much code relies on the current specification.
This document provides an introduction to Kibana4 and how to use its features. It discusses the major components of Kibana4 including Discover, Visualize, and Dashboard. It also covers visualization types like metrics, buckets, and aggregations. The document provides examples of using aggregations versus facets and describes settings, scripted fields, and plugins. It concludes by discussing potential future directions for Kibana.
Celery is a distributed task queue that allows long-running processes to be executed asynchronously outside of the main request-response cycle. It uses message brokers like RabbitMQ to distribute jobs to worker nodes for processing. This improves request performance and allows tasks to be distributed across multiple machines. Common use cases include asynchronous tasks like email sending, long database operations, image/video processing, and external API calls.
This document discusses Node.js architecture and how software lives in hardware. It notes that Node.js uses a single-threaded, event loop model to avoid context switching and blocking I/O. This allows high throughput for operations like HTTP requests but is not optimal for long-running computations in a single thread. The document also addresses issues like callback hell and scaling event emitters, providing solutions like using promises and external queue systems. It concludes by stating Node.js is best for I/O operations, not all problems, and event loop models have existed in other frameworks before Node.js.
Raymond Kuiper - Working the API like a Unix ProZabbix
Communicating with the Zabbix API can be quite cumbersome, especially if you don't have a background as a programmer. For a sysadmin, it would be very nice if one could just run some CLI commands to control Zabbix behavior.
Wouldn't it be wonderful if you could fetch a list of active triggers and parse it with grep or sed to find the specific triggers you are looking for? Or perhaps you need a list of historic values that you can parse in a custom script? How about a cronjob that downloads and emails all the graphs in the system matching a certain regex?
In this presentation Raymond Kuiper will talk about some of these possibilities and show you how he achieved these things in his Zabbix setup.
Zabbix Conference 2015
This document discusses optimizing JavaScript performance in Node.js. It covers benchmarking Node applications, tips for writing efficient JavaScript that avoids hidden classes and dictionary mode in the V8 engine, profiling Node to find hot spots, and how the V8 optimizing compiler works. The presenter emphasizes the importance of speed and provides resources for further optimizing Node applications.
Introduction to performance tuning perl web applicationsPerrin Harkins
This document provides an introduction to performance tuning Perl web applications. It discusses identifying performance bottlenecks, benchmarking tools like ab and httperf to measure performance, profiling tools like Devel::NYTProf to find where time is spent, common causes of slowness like inefficient database queries and lack of caching, and approaches for improvement like query optimization, caching, and infrastructure changes. The key messages are that performance issues are best identified through measurement and profiling, database queries are often the main culprit, and caching can help but adds complexity.
I will show how to use Go's database/sql package, with MySQL as an example. Although the documentation is good, it's dense. I'll discuss idiomatic database/sql code, and cover some topics that can save you time and frustration, and perhaps even prevent serious mistakes.
"Roles and Profiles" is now the ubiquitous design pattern to create your puppet code tree. In this talk we will discuss writing reusable and maintainable profiles. We ll start by introducing creating module structures and will move on to type hinting and setting appropriate defaults. Finally we ll discuss the importance and the enforcing of code style conventions that allows multiple teams or projects to inner-source profiless
This document discusses techniques for building scalable websites with Perl, including:
1) Caching at various levels (page, partial page, and database caching) to improve performance and reduce load on application servers.
2) Using job queuing and worker processes to distribute processing-intensive tasks asynchronously instead of blocking web requests.
3) Leveraging caching and queueing libraries like Cache::FastMmap, Memcached, and Spread::Queue to implement caching and job queueing in Perl applications.
"As an asynchronous event driven JavaScript runtime, Node is designed to build scalable network applications" così si presenta Node.js, piattaforma tecnologica che - grazie alla sua immediatezza e produttività - ha conquistato dapprima startup e piccole aziende, fino a ritagliarsi uno spazio importante in realtà come IBM, LinkedIn, Netflix e Yahoo. La stessa Microsoft ha riconosciuto le potenzialità della piattaforma, tanto da integrare Node.js in Visual Studio Code e nelle ultime release di Visual Studio, oltre a basarci alcuni dei propri servizi di Azure come "Mobile Services" e "Functions".
In questa sessione vedremo come implementare con Node.js alcuni scenari applicativi comuni nell’ambito dello sviluppo web, analizzando quando la sua adozione può portarci vantaggi nel nostro lavoro quotidiano. In conclusione, faremo una breve panoramica architetturale, descrivendo alcuni scenari di cooperazione tra .NET e Node.js nello stesso sistema.
Codice e demo: https://github.com/rucka/CommunityDays2016
Things like Infrastructure as Code, Service Discovery and Config Management can and have helped us to quickly build and rebuild infrastructure but we haven't nearly spend enough time to train our self to review, monitor and respond to outages. Does our platform degrade in a graceful way or what does a high cpu load really mean? What can we learn from level 1 outages to be able to run our platforms more reliably.
We all love infrastructure as code; we automate everything™. However, making sure all of our infrastructure assets are monitored effectively can be a slow and resource-intensive multi-stage process. During this talk we will investigate how to set up a Nomad cluster that can automatically scale our infrastructure both horizontally and vertically to cope with increased user demand.
This talk will focus on configuring Nomad and its new autoscaler component to make data-driven decisions about scaling Nomad jobs in or out to fit current customer usage.
Gatling is a load testing tool for analyzing and measuring the performance of a variety of services, with a focus on web applications. It is a Scala-based, high-performance load and stress testing tool.
Blast your app with Gatling! by Stephane LandelleZeroTurnaround
This document discusses load testing tools and introduces Gatling as an easy to use and high performance load testing tool. It notes issues with other tools like JMeter having poor performance with many threads and blocking I/O. Gatling uses an asynchronous actor model with non-blocking I/O to reach new performance limits. It has a Scala DSL for defining tests and supports features like CSV and JDBC feeders, error handling, and reporting plugins. A demo of Gatling is provided.
This document discusses performance testing using Gatling. Gatling is introduced as a tool for performance testing systems to determine how fast and stable they are under different loads. The document promises to provide a live demo of Gatling's capabilities for performance testing code.
Gatling demo at the Performance User Group in Casablanca - 25 Sept 2014Benoît de CHATEAUVIEUX
In 2008, an application was perceived as slow after 4 seconds; by 2014, the threshold had dropped to 3 seconds.
The performance of web applications has become crucial: Generation Y is far less patient (it never knew 56k modems!) and switches away very easily.
The business impact of web application performance is therefore significant: lost revenue, lost customers, and so on.
In this session of the Casablanca Performance User Group, I presented Gatling, an open-source load testing tool that is simple, highly scalable, and easy to integrate into a continuous performance testing practice.
This document provides an overview of Gatling, an open source load testing tool. It can record scenarios from browser interactions similar to Selenium and run multiple scenarios simultaneously. The document discusses Gatling's scripting domain specific language (DSL) for defining scenarios, advanced features like loading data from files, and how to get started using Gatling by downloading, creating a simulation, and viewing reports. It also briefly mentions Gatling's internal architecture using Scala, Akka, and Netty.
JIRA Performance Testing in Pictures - Edward Bukoski Michael MarchAtlassian
This document summarizes a presentation about performance testing tools used at JP Morgan Chase for their JIRA instance over several years. It discusses the tools they used - Load Runner, JMeter, nGrinder, and more recently Gatling with InfluxDB and Grafana. It provides a scorecard comparing the tools and demonstrations of using Gatling and viewing results in Grafana. Key takeaways are to pick the right tools for the job, know your constraints and goals, and reach out for help from Atlassian and partners.
The document discusses the importance of stability in pharmaceutical compounding and outlines factors that can affect stability. It defines stability as a product retaining its properties and characteristics within specified limits throughout its shelf life. There are five main types of stability: chemical, physical, microbiological, therapeutic, and toxicological. Factors like temperature, light, humidity, ingredients, dosage form, pH, and solvent composition can influence stability. Pharmacists must store products under proper conditions and expiration dates to ensure stability and prevent issues.
This document provides an overview of the ICH Q1A(R2) guideline for stability testing of new drug substances and products. The guideline defines the stability data package required for drug registration in major regions. It addresses testing timelines and conditions for long term, intermediate, and accelerated studies on at least three batches of drug substance and product. The goal is to establish a re-test period or shelf life and recommended storage conditions. Specifications must cover attributes susceptible to change that could impact quality, safety or efficacy. The guideline provides detailed recommendations for testing frequency, storage conditions, and evaluation of results.
Seminar on accelerated stability testing of dosage forms sahilsahilhusen
This document discusses stability testing and shelf life prediction of pharmaceutical products. It defines stability as a product remaining within specifications over its shelf life. Stability testing establishes a shelf life and optimal storage conditions. Types of stability studies discussed are long term, intermediate, and accelerated testing under various temperature and humidity conditions. The Arrhenius equation is used to predict shelf life from accelerated data by relating reaction rate to temperature. Packaging selection considers permeability. Accelerated tests for emulsions and suspensions are also summarized.
The document discusses ICH stability guidelines for pharmaceutical products. It provides an overview of key ICH guidelines including Q1A(R2) on stability testing of new drug substances and products and Q1B on photo stability testing. Q1A(R2) outlines the core stability data package required, including testing conditions, number of batches, and stability commitments. It also defines criteria for significant changes. Q1B covers photo stability testing conditions and study design. The guidelines aim to provide stability information for marketing applications and ensure quality, safety and efficacy over the shelf life of pharmaceutical products.
The document contains details about a student named Srikanth Bandi enrolled in the Pharmaceutics department. It discusses accelerated stability testing, which involves exposing pharmaceutical products to elevated temperatures to simulate long-term shelf conditions over a shorter time period. The objectives and guidelines from the ICH are outlined, including storage conditions, sampling times, and test parameters. The document also describes the equipment used and process for conducting accelerated stability studies.
Stability testing and shelf life estimationManish sharma
Drug stability refers to the extent to which a pharmaceutical product retains its quality attributes, such as concentration of active ingredients, over time. Stability testing is necessary to determine a drug's shelf life and recommended storage conditions. It involves evaluating a drug's chemical, physical, and microbial properties under different temperatures and humidity levels over time. The Arrhenius equation can be used to predict a drug's stability at normal temperatures based on its degradation rates observed during accelerated stability testing at elevated temperatures. International guidelines provide recommendations for long-term and accelerated stability study protocols and minimum data requirements for drug substances and products to ensure quality, safety and efficacy over a product's shelf life.
This document discusses drug stability and factors that affect it. It defines drug stability as a drug product remaining within established specifications for identity, strength, quality and purity. Factors like temperature, humidity, light and microbial contamination can cause drug degradation through chemical, physical and biological processes like hydrolysis, oxidation and photolysis. The document outlines various packaging materials and how they can impact stability. It also describes different types of stability studies conducted, including long-term real-time testing and accelerated methods like elevated temperature to evaluate products' shelf lives under normal conditions.
The document discusses guidelines for stability testing from the International Conference on Harmonisation (ICH). It provides an overview of several ICH guidelines related to stability testing of drug substances and products, including guidelines on photostability testing, new dosage forms, bracketing and matrixing designs, and evaluation of stability data. It also summarizes key aspects of conducting stability studies such as selecting representative batches, appropriate container closure systems, testing frequency and storage conditions, and evaluation of results. Stress testing is discussed as a way to validate analytical methods and identify potential degradants.
Accelerated stability testing exposes pharmaceutical products to elevated temperatures and humidity to rapidly determine their shelf life. Samples are stored at conditions like 40°C/75%RH and tested over time. The Arrhenius equation relates reaction rate constants at different temperatures, allowing prediction of shelf life at normal storage conditions from accelerated data. Limitations include reactions not dependent on temperature alone and products losing integrity at high stresses.
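The extrapolation step described above can be sketched in a few lines of Python. All numbers below (the observed rate constant and the activation energy) are hypothetical illustrations; first-order degradation and a t90 shelf-life criterion are assumed:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def k_at(T2, k1, T1, Ea):
    """Arrhenius extrapolation: rate constant at T2 given k at T1 and activation energy Ea (J/mol)."""
    return k1 * math.exp(-Ea / R * (1 / T2 - 1 / T1))

# Hypothetical accelerated-study result: first-order loss with
# k = 0.0005 per day observed at 40 C, assumed Ea of 83 kJ/mol.
k40 = 5e-4
k25 = k_at(298.15, k40, 313.15, 83_000)  # extrapolate down to 25 C

# For first-order kinetics, t90 (time to 90% of label claim) = ln(100/90)/k.
t90_days = 0.105 / k25
print(f"k(25 C) = {k25:.2e} /day, estimated t90 = {t90_days:.0f} days")
```

With these made-up inputs the predicted room-temperature shelf life comes out near three years, which is exactly the kind of estimate an accelerated study is used to justify (subject to the limitations the summary notes, such as reactions not governed by temperature alone).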
Unit testing a Java applicationAntoine Rey
Presents the different types of automated tests, the goals of unit tests, implementation strategies, best practices, common difficulties, what a mock is, various tools (Unitils, Mockito, DbUnit, Spring Test) and example tests (DAOs and Spring MVC controllers), without forgetting how to test legacy code.
Performance tests with Gatling are difficult for three main reasons:
1) The test environment must closely simulate production in terms of hardware, software, and load.
2) Proper infrastructure for monitoring, logging, and isolating tests is required.
3) Performance intuition can be wrong, so statistics like percentiles must be used rather than averages.
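The point about percentiles can be made concrete with a toy example (the latency numbers below are invented for illustration): a handful of slow outliers barely moves the mean but dominates the tail.

```python
# Synthetic latencies in ms: 95 fast requests and 5 slow outliers.
latencies = [20] * 95 + [2000] * 5

def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(values)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

mean = sum(latencies) / len(latencies)
print(f"mean   = {mean:.0f} ms")                   # 119 ms -- looks almost fine
print(f"median = {percentile(latencies, 50)} ms")  # 20 ms
print(f"p99    = {percentile(latencies, 99)} ms")  # 2000 ms -- the real story
```

A report built on the mean would hide the fact that 5% of users waited two full seconds, which is why load testing tools report latency distributions rather than a single average.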
The document discusses load testing and why it often fails. It recommends using a tool with the lowest barrier to entry, such as Blitz, which allows load testing on AWS with a web form or API. Blitz produces results but no reports, so the document shows how to modify Blitz's code to output JSON results for reporting purposes. It encourages integrating load testing into existing development workflows rather than treating it separately after deployment.
We all have tasks from time to time for bulk-loading external data into MySQL. What's the best way of doing this? That's the task I faced recently when I was asked to help benchmark a multi-terabyte database. We had to find the most efficient method to reload test data repeatedly without taking days to do it each time. In my presentation, I'll show you several alternative methods for bulk data loading, and describe the practical steps to use them efficiently. I'll cover SQL scripts, the mysqlimport tool, MySQL Workbench import, the CSV storage engine, and the Memcached API. I'll also give MySQL tuning tips for data loading, and how to use multi-threaded clients.
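The presentation's MySQL-specific tools aside, the underlying tuning principle, inserting many rows per statement and per transaction rather than autocommitting one row at a time, can be illustrated with the stdlib's sqlite3 (an illustration of the batching idea with a hypothetical table, not a MySQL benchmark):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

rows = [(i, f"row-{i}") for i in range(10_000)]

# One transaction for the whole batch; without this, each INSERT would be
# its own commit, which is the classic bulk-loading performance killer.
with conn:
    conn.executemany("INSERT INTO t (id, payload) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 10000
```

Tools like mysqlimport and LOAD DATA apply the same principle at scale, bypassing per-statement overhead entirely.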
Distributed Load Testing with k6 - DevOps BarcelonaThijs Feryn
Slides for My "Distributed Load Testing with k6" presentation at DevOps Barcelona 2023.
In this presentation I introduce k6, the open source load testing tool by Grafana Labs.
I show how to write tests in Javascript, how to run the tests using the CLI tools and how to measure the right results and configure the right checks, metrics & thresholds.
In terms of distributed testing, I also talk about the built-in API that allows you to remotely trigger tests from different machines and how to centralize and visualize real-time metrics using Prometheus and Grafana.
See https://feryn.eu/speaking/distributed-load-testing-k6-devops-barcelona-23/ for more information.
Distributed load testing with K6 - NDC London 2024Thijs Feryn
This presentation at NDC London 2024 is about the K6 load testing tool. It features the basics, but also explains how you can use it to perform distributed load testing and store test results in Prometheus.
See https://feryn.eu/presentations/distributed-load-testing-k6-ndc-london-2024 for more information.
Slides for my k6 load testing presentation at Confoo 2023 in Montreal Canada.
See https://feryn.eu/speaking/distributed-load-testing-k6-confoo23/ for more information.
Rich and Snappy Apps (No Scaling Required)Thomas Fuchs
Presentation by Amy Hoy and Thomas Fuchs about front-end web application performance at Kings of Code, Amsterdam, June 2009.
Main topics are loading-time performance, JavaScript tuning and progress indication.
Note that without the audio this is probably not very useful and it's mainly intended for attendees of the talk.
The document discusses performance testing and summarizes that:
1. Performance tests should closely simulate production environments including hardware, software, load, and isolation.
2. Extensive monitoring, logging, and profiling data should be collected to identify bottlenecks based on data rather than intuition.
3. Performance testing can be misleading without sufficient data due to issues like coordinated omission, so tools like Gatling and WRK2 that avoid this problem are recommended.
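Coordinated omission is easiest to see in a small simulation. The sketch below (hypothetical numbers, not Gatling or WRK2 code) contrasts a closed-loop tester that records only service times with latencies measured against the intended request schedule:

```python
# Toy model of coordinated omission. A server normally answers in 1 ms but
# stalls for 1000 ms once. A closed-loop tester that only records service
# times sees a single bad sample; measuring against the intended 1-ms
# schedule also counts every request queued behind the stall as slow.
service_times = [1] * 10 + [1000] + [1] * 10  # ms, one stall mid-run
interval = 1  # intended ms between requests

naive = list(service_times)  # what a closed-loop tool records

corrected = []
now = 0
for i, s in enumerate(service_times):
    intended_start = i * interval
    start = max(now, intended_start)           # queued behind earlier responses
    finish = start + s
    corrected.append(finish - intended_start)  # latency vs the schedule
    now = finish

print(f"naive samples over 2 ms:     {sum(1 for x in naive if x > 2)}")      # 1
print(f"corrected samples over 2 ms: {sum(1 for x in corrected if x > 2)}")  # 11
```

The naive view reports one slow request out of 21; the schedule-corrected view shows that more than half of the intended requests experienced the stall, which is the distortion tools that avoid coordinated omission are designed to prevent.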
The document describes the author's experience deploying and configuring Varnish caching at Opera over many years. Some key points discussed include:
- Initial deployment in 2009 caching static assets for My Opera, which grew to serve 15% of requests
- Troubleshooting issues like session mixing and unauthorized access
- Implementing caching for dynamic pages like the front page while respecting cookies and languages
- Decentralizing caching to multiple data centers for lower latency globally
- Generating and caching thumbnails on-the-fly to handle frequent design changes
- Developing a more generic "shields-up" configuration to cache unpopular content securely
- Ongoing work caching APIs and content on other
Two years ago Rackspace had a problem: how do we backup 20K network devices, in 8 datacenters, across 3 continents, with less than a 1% failure rate -- every single day? Many solutions were tried and found wanting: a pure Perl solution, a vendor solution, and then one in Ruby; none worked well enough. They were not fast enough, or not reliable enough, or not transparent enough when things went wrong. Now we all love Ruby but good Rubyists know that it is not always the best tool for the job. After re-examining the problem we decided to rewrite the application in a mixture of Erlang and Ruby. By exploiting the strengths of both -- Erlang's astonishing support for parallelism and Ruby's strengths in web development -- the problem was solved. In this talk we'll get down and dirty with the details: the problems we faced and how we solved them. We'll cover the application architecture, how Ruby and Erlang work together, and the Erlang approach to asynchronous operations (hint: it does not involve callbacks). So come on by and find out how you can get these two great languages to work together.
Matteo Collina | Take your HTTP server to Ludicrous Speed | Codemotion Madrid...Codemotion
In my journey through nodeland, I always wonder about the cost of my abstractions. Express, Hapi, Restify, or just plain Node.js core? require(‘http’) can reach 30k requests/sec, Express 22k, and Hapi 21k. I started a journey to write an HTTP framework with extremely low overhead, and Fastify was born. With its ability to reach an astonishing 37k requests/sec, Fastify can halve your cloud server bill. How can Fastify be so.. fast? We will discover all the not-so-secret techniques that were used to optimize it. In Fastify we reach a point where even allocating a callback is too slow: Ludicrous
The document describes configuring and managing a DNS zone hosted on AWS Route 53. It includes steps to:
1) Create a hosted zone for the domain "cloudgirl.baking.jp" on Route 53 and view the nameservers assigned;
2) Add a record set for the subdomain "www" with an A record pointing to an IP address; and
3) Delete the record set and eventually the hosted zone.
fog or: How I Learned to Stop Worrying and Love the CloudWesley Beary
Learn how to easily get started on cloud computing with fog. If you can control your infrastructure choices, you’ll make better choices in development and get what you need in production. You'll get an overview of fog and concrete examples to give you a head start on your provisioning workflow.
fog or: How I Learned to Stop Worrying and Love the Cloud (OpenStack Edition)Wesley Beary
The document discusses how to use the Fog library to interact with cloud services. Fog allows interacting with multiple cloud providers like AWS, Rackspace, etc in a portable way. It provides models, collections, and methods to manage resources like servers, storage, DNS etc. in an abstracted way across providers. The document demonstrates how to boot a server, install SSH keys, run commands via SSH, and ping a target using the Fog and Ruby APIs in just a few lines of code.
Play Framework and Ruby on Rails are web application frameworks that help developers build web applications. Both frameworks provide tools and libraries for common tasks like routing, database access, templates and more. Some key similarities include using MVC patterns, supporting SQL/NoSQL databases via libraries, and including tools for unit testing and deployment. Some differences are Play uses Scala and Java while Rails uses Ruby, and they have different project structures and ways of handling assets, templates and dependencies. Both aim to help developers build web applications faster with their features and ecosystem of supporting libraries.
The document discusses various topics related to optimizing performance for PostgreSQL including:
- Indexes and how to use EXPLAIN and EXPLAIN ANALYZE to analyze query performance. Conditional, functional and concurrent indexes are covered.
- Connection pooling options for Django like django-postgrespool to improve connection management.
- Replication options such as Slony, Bucardo, pgpool, WAL-E and Barman for high availability.
- Backup strategies including logical backups with pg_dump and physical backups using base backups. When each approach is best to use.
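The EXPLAIN workflow generalizes: check the plan, add an index, check again. Postgres's EXPLAIN ANALYZE is the tool discussed above; the same loop can be sketched with sqlite3 from the stdlib (EXPLAIN QUERY PLAN, against a hypothetical table and index):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

def plan(sql):
    """Flatten the query plan's detail column into one string."""
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM users WHERE email = 'a@b.c'"
before = plan(query)                                          # full table scan
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(query)                                           # uses the index

print(before)
print(after)
```

In Postgres the same check is `EXPLAIN ANALYZE SELECT ...` before and after `CREATE INDEX CONCURRENTLY ...`, with the bonus that ANALYZE reports actual timings rather than just the chosen plan.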
Dask is a task scheduler that seamlessly parallelizes Python functions across threads, processes, or cluster nodes. It also offers a DataFrame class (similar to Pandas) that can handle data sets larger than the available memory.
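Dask itself is a third-party library, but the core idea it generalizes, fanning a pure Python function out across a pool of workers and gathering the results, can be sketched with the stdlib executor (a sketch of the pattern, not Dask's API):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    """Stand-in for a CPU- or I/O-bound task to parallelize."""
    return x * x

# map() distributes the calls across the pool and yields results in order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Dask adds what this sketch lacks: a task graph spanning processes and cluster nodes, and chunked DataFrame operations for data sets larger than memory.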
So you’ve developed an app in MongoDB Stitch? Now what? What is day-to-day use of MongoDB Stitch really like? We’ll talk topics like multi-environment deployment (dev → test → production); logging; testing and timing; and how to best make MongoDB and Stitch work for your application.
QA Fest 2019. Anton Moldovan. Load testing which you always wantedQAFest
About a year ago we started working on a new version of our products. That was when we began trying out different technologies, architectures and approaches, and above all measuring performance, because without that you simply cannot survive in highload projects.
When designing any system we need to know its limits:
- how many parallel requests can a microservice handle within an acceptable latency?
- how many requests can the database we use withstand?
- how long do we have to wait for a push notification?
- how long does a distributed transaction take, and between which services does the biggest delay occur?
And we had countless questions like these. During testing we used various tooling: JMeter, ab, Gatling, but all of them offered very limited capabilities. We could not properly cover the push flow (WebSockets/SSE) or different databases, and it was hard to simulate different workloads (update/read).
In this talk I will share our experience applying load testing:
- what we use to test databases and microservices;
- how we prepare pull/push tests and adapt tests to different protocols (HTTP/WebSockets/SSE);
- what problems arise when measuring latency.
My talk is very practical, so afterwards you will be able to easily start applying load testing on your own project.
Anton Moldovan "Load testing which you always wanted"Fwdays
Similar to Performance and stability testing w/ Gatling (20)
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
What do a Lego brick and the XZ backdoor have in common?Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case share much more than that.
Join the presentation to dive into a story of interoperability, standards and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations and training activities. She previously worked on LibreOffice migrations and training courses for several public administrations and private organizations. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not following her passion for computers and for Geeko she cultivates her curiosity about astronomy (which is where her nickname deneb_alpha comes from).
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means denormalised databases where each table represents a dimension or the facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
- Insightful presentations covering two practical applications of the Power Grid Model.
- An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
- An interactive brainstorming session to discuss and propose new feature requests.
- An opportunity to connect with fellow Power Grid Model enthusiasts and users.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demonstrated their authorization endpoints conforming to the AuthZEN API.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also approaches that can lead to unnecessary expense, for example when a person document is used instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
46. import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class Test extends Simulation {
  // Target the Neo4j server's custom query extension
  val httpConf = http.baseURL("http://neo-database:7474")

  val test = scenario("GetGraph")
    .exec(
      http("execute_query")
        .post("/extension/query/execute")
        .body(StringBody("MATCH (root)-[:HAS*]->(child)"))
        .check(status.is(200))                      // expect HTTP 200
        .check(jsonPath("$.results").count.is(100)) // expect 100 result rows
    )

  setUp(
    test.inject(
      // ramp the arrival rate linearly from 0 to 1000 users/sec over one minute
      rampUsersPerSec(0) to (1000) during (60 seconds)
    )
  ).protocols(httpConf)
}
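The injection profile above ramps the arrival rate linearly, so the total number of virtual users started over the ramp is the area under that rate curve. Before running a load test it can be worth sanity-checking this arithmetic; a quick sketch in plain Scala (independent of the Gatling DSL; the object and method names here are illustrative):

```scala
object RampCheck {
  // Total users injected by a linear ramp from `fromRps` to `toRps`
  // users/sec over `durationSec` seconds: the area under the rate curve.
  def totalUsers(fromRps: Double, toRps: Double, durationSec: Double): Double =
    (fromRps + toRps) / 2.0 * durationSec

  def main(args: Array[String]): Unit = {
    // rampUsersPerSec(0) to (1000) during (60 seconds)
    println(totalUsers(0, 1000, 60).toLong) // 30000 virtual users in total
  }
}
```

Thirty thousand users in one minute is a heavy profile, which is exactly why the server-side limits discussed on the following slides matter.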
58. • Incorrect server setup
• Unconfigured networking
• Low max open connection limit
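The last pitfall, a low max open connection limit, often only surfaces as socket errors under load, because each open socket consumes a file descriptor. On a Unix-like JVM host the descriptor limits can be inspected at runtime; a minimal sketch, assuming the HotSpot-specific com.sun.management.UnixOperatingSystemMXBean is available:

```scala
import java.lang.management.ManagementFactory
import com.sun.management.UnixOperatingSystemMXBean

object FdLimitCheck {
  def main(args: Array[String]): Unit = {
    ManagementFactory.getOperatingSystemMXBean match {
      case os: UnixOperatingSystemMXBean =>
        // A low max-FD limit caps how many concurrent connections
        // the process can hold open during a load test.
        println(s"open fds: ${os.getOpenFileDescriptorCount}")
        println(s"max fds:  ${os.getMaxFileDescriptorCount}")
      case _ =>
        println("file-descriptor counts not exposed on this platform")
    }
  }
}
```

On Linux the process-wide limit can also be raised via `ulimit -n` before starting the server.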
59. Learned
- Performance testing should be done
- Test results should be interpreted properly
- Test results should be verified in different environments