Rate Limiting at Scale, from SANS AppSec Las Vegas 2012
Rate limit everything, all the time, using a quantized time system with Memcache or Redis. Use this to protect resources or to discover anomalies.
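The quantized-time approach can be sketched in a few lines. The sketch below is illustrative, not the talk's actual code: a plain dict stands in for Memcache/Redis, and `is_allowed` is a hypothetical helper name. In production the increment would be an atomic INCR with a TTL set on the per-window key.

```python
import time

# Minimal sketch of quantized-time (fixed-window) rate limiting.
# A plain dict stands in for Memcache/Redis; with Redis you would
# use an atomic INCR plus EXPIRE on the per-window key instead.
store = {}

def is_allowed(client_id, limit, window_secs=60, now=None):
    """Allow up to `limit` requests per `window_secs` window."""
    now = time.time() if now is None else now
    bucket = int(now // window_secs)       # quantize time into windows
    key = f"{client_id}:{bucket}"
    count = store.get(key, 0) + 1          # Redis: INCR key; EXPIRE key window_secs
    store[key] = count
    return count <= limit
```

The first `limit` calls inside a window succeed and later ones are rejected; once the clock crosses into the next window, quantizing `now` produces a fresh key and the count starts over.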
Processing TeraBytes of data every day and sleeping at night, by Luciano Mammino
This is the story of how we built a highly available data pipeline that processes terabytes of network data every day, making it available to security researchers for assessment and threat hunting. Building this kind of stuff on AWS is not that complicated, but if you have to make it near real-time, fault tolerant and 24/7 available, well... that's another story. In this talk, we will tell you how we achieved this ambitious goal and how we missed a few good nights of sleep while trying to do that! Spoiler alert: contains AWS, serverless, elastic search, monitoring, alerting & more!
Important Factors that bring down trading system projects, by NexSoftsys
The most important factor in any trading system is its reliability, which is why companies hire Java developers and code these projects in Java. Instead of dealing with crash reports and second-guessing, ask your development team to focus on the reliability of the system.
No C-QL (Or how I learned to stop worrying, and love eventual consistency) (N…), by Brian Brazil
Traditional relational databases focus on ACID, providing strong semantics that require careful synchronisation between actors and limit scalability. NoSQL column stores such as Cassandra, Riak and Dynamo offer another way: by eschewing strong consistency you can meet your application's needs while also increasing scalability and reliability. This talk will cover how and where to use eventual consistency.
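As a toy illustration of the trade-off (not from the talk): Dynamo-style stores often let replicas diverge during a partition and reconcile on read. One of the simplest reconciliation policies is last-write-wins, sketched here with a hypothetical `lww_merge` helper.

```python
# Sketch of last-write-wins (LWW) reconciliation, one simple policy
# Dynamo-style stores can use to resolve divergent replicas: every
# write carries a timestamp, and on read-repair the newest wins.
def lww_merge(*versions):
    """Each version is a (timestamp, value) pair; return the winner."""
    return max(versions, key=lambda v: v[0])

# Two replicas accepted conflicting writes while partitioned:
replica_a = (1700000005, "blue")
replica_b = (1700000009, "green")
merged = lww_merge(replica_a, replica_b)   # the later write wins
```

LWW trades the write that loses for availability; vector clocks or CRDTs are the usual answer when losing writes is unacceptable.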
Monitoring Big Data Systems - "The Simple Way", by Demi Ben-Ari
Once you start working with distributed Big Data systems, you start discovering a whole bunch of problems you won’t find in monolithic systems.
All of a sudden, monitoring all of the components becomes a big data problem in itself.
In the talk we’ll cover all of the aspects you should take into consideration when monitoring a distributed system built with tools like:
Web Services, Apache Spark, Cassandra, MongoDB, Amazon Web Services.
Not only the tools, what should you monitor about the actual data that flows in the system?
We’ll also cover the simplest solution, built with your day-to-day open source tools; the surprising thing is that it comes not from an Ops guy.
Demi Ben-Ari is a Co-Founder and CTO @ Panorays.
Demi has over 9 years of experience building various systems, both near-real-time applications and Big Data distributed systems.
He describes himself as a software development groupie, interested in tackling cutting-edge technologies.
Demi is also a co-founder of the “Big Things” Big Data community: http://somebigthings.com/big-things-intro/
Prometheus Design and Philosophy by Julius Volz at Docker Distributed System Summit
Prometheus - https://github.com/Prometheus
Liveblogging: http://canopy.mirage.io/Liveblog/MonitoringDDS2016
Provisioning and Capacity Planning Workshop (Dogpatch Labs, September 2015), by Brian Brazil
If you’ve ever worried that you may have an outage someday due to your production servers not being able to handle increased user traffic, then this workshop will help put you at ease. Learn the foundations and how to apply them to your services.
Contact me at brian.brazil@robustperception.io if you'd like to learn more.
Next generation alerting and fault detection, SRECon Europe 2016, by Dieter Plaetinck
There is a common belief that in order to solve more [advanced] alerting cases and get more complete coverage, we need complex, often math-heavy solutions based on machine learning or stream processing.
This talk sets context and pros/cons for such approaches, and provides anecdotal examples from the industry, nuancing the applicability of these methods.
We then explore how we can get dramatically better alerting, as well as make our lives a lot easier by optimizing workflow and machine-human interaction through an alerting IDE (exemplified by bosun), basic logic, basic math and metric metadata, even for solving complicated alerting problems such as detecting faults in seasonal timeseries data.
https://www.usenix.org/conference/srecon16europe/program/presentation/plaetinck
Systems Monitoring with Prometheus (Devops Ireland April 2015), by Brian Brazil
Monitoring means many things to many people. This talk looks at Systems Monitoring, that is how to keep an eye on a given system and use this as part of overall management of a system. This talk will cover Why one monitors, What to monitor, How to monitor, the general design of a monitoring system and how Prometheus is a good fit for this in terms of instrumentation, consoles, alerts, general system health and sanity.
Prometheus is a next-generation monitoring system publicly announced earlier this year, developed by companies including SoundCloud, locals Boxever and Docker. Since launch there has been widespread interest, and many community contributions.
For more information see http://prometheus.io or http://www.boxever.com/tag/monitoring
Counting with Prometheus (CloudNativeCon+Kubecon Europe 2017), by Brian Brazil
Counters are one of the two core metric types in Prometheus, allowing for tracking of request rates, error ratios and other key measurements. Learn why they are designed the way they are, how client libraries implement them and how rate() works.
If you'd like more information about Prometheus, contact us at prometheus@robustperception.io
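The core idea behind `rate()` can be illustrated with a small sketch (an approximation, not Prometheus's actual implementation, which also extrapolates toward the range boundaries): counters only ever increase, so a sample that drops below its predecessor signals a process restart, and the lost increase is recovered by counting up from zero.

```python
def counter_rate(samples):
    """Per-second rate from (timestamp, value) counter samples,
    compensating for counter resets (value dropping on restart)."""
    increase = 0.0
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        # Normal step: add the delta. Reset: the counter restarted
        # from 0, so the whole new value is the increase since then.
        increase += (v1 - v0) if v1 >= v0 else v1
    span = samples[-1][0] - samples[0][0]
    return increase / span

# 50 requests counted over 20s, with a restart between t=10 and t=20:
samples = [(0, 50), (10, 90), (20, 10)]
```

Here the increase is 40 (first step) plus 10 (after the reset), over a 20-second span, giving 2.5 requests/second rather than a nonsensical negative rate.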
August 2016 HUG: Open Source Big Data Ingest with StreamSets Data Collector, by Yahoo Developer Network
Big data tools such as Hadoop and Spark allow you to process data at unprecedented scale, but keeping your processing engine fed can be a challenge. Upstream data sources can 'drift' due to infrastructure, OS and application changes, causing ETL tools and hand-coded solutions to fail. StreamSets Data Collector (SDC) is an open source platform for building big data ingest pipelines that allows you to design, execute and monitor robust data flows. In this session we'll look at how SDC's "intent-driven" approach keeps the data flowing, whether you're processing data 'off-cluster', in Spark, or in MapReduce.
StreamSets software delivers performance management for data flows that feed the next generation of big data applications. Its mission is to bring operational excellence to the management of data in motion, so that data arrives on time and with quality, accelerating analysis and decision making. StreamSets Data Collector is in use at hundreds of companies where it brings unprecedented visibility into and control over data as it moves between an expanding variety of sources and destinations.
Speakers:
Pat Patterson has been working with Internet technologies since 1997, building software and working with communities at Sun Microsystems, Huawei, Salesforce and StreamSets. At Sun, Pat was the community lead for the OpenSSO open source project, while at Huawei he developed cloud storage infrastructure software. Part of the developer evangelism team at Salesforce, Pat focused on identity, integration and the Internet of Things. Now community champion at StreamSets, Pat is responsible for the care and feeding of the StreamSets open source community.
August 2016 HUG: Better together: Fast Data with Apache Spark™ and Apache Ignite, by Yahoo Developer Network
Spark and Ignite are two of the most popular open source projects in the area of high-performance Big Data and Fast Data. But did you know that one of the best ways to boost performance for your next generation real-time applications is to use them together? In this session, Dmitriy Setrakyan, Apache Ignite Project Management Committee Chairman and co-founder and CPO at GridGain will explain in detail how IgniteRDD — an implementation of native Spark RDD and DataFrame APIs — shares the state of the RDD across other Spark jobs, applications and workers. Dmitriy will also demonstrate how IgniteRDD, with its advanced in-memory indexing capabilities, allows execution of SQL queries many times faster than native Spark RDDs or Data Frames. Don't miss this opportunity to learn from one of the experts how to use Spark and Ignite better together in your projects.
Speakers:
Dmitriy Setrakyan is a founder and CPO at GridGain Systems. Dmitriy has been working with distributed architectures for over 15 years and has expertise in the development of various middleware platforms, financial trading systems, CRM applications and similar systems. Prior to GridGain, Dmitriy worked at eBay, where he was responsible for the architecture of an ad-serving system processing several billion hits a day. Currently Dmitriy also acts as PMC chair of the Apache Ignite project.
The first part of the talk will describe the anatomy of a typical data pipeline and how Apache Oozie meets the demands of large-scale data pipelines. In particular, we will focus on recent advancements in Oozie for dependency management among pipeline stages, incremental and partial processing, combinatorial, conditional and optional processing, priority processing, late processing and BCP management. The second part of the talk will focus on out-of-the-box support for Spark jobs.
Speakers:
Purshotam Shah is a senior software engineer with the Hadoop team at Yahoo, and an Apache Oozie PMC member and committer.
Satish Saley is a software engineer at Yahoo!. He contributes to Apache Oozie.
Detailed design for a robust counter as well as design for a completely on-line multi-armed bandit implementation that uses the new Bayesian Bandit algorithm.
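The talk's exact designs aren't reproduced here, but a minimal Bayesian bandit in the same spirit can be sketched with Thompson sampling: keep a Beta posterior per arm, draw a sample from each, and play the arm with the best draw. The class and method names below are invented for illustration.

```python
import random

class BayesianBandit:
    """Thompson sampling over Bernoulli arms with Beta(1, 1) priors."""
    def __init__(self, n_arms):
        self.wins = [1] * n_arms     # Beta alpha per arm
        self.losses = [1] * n_arms   # Beta beta per arm

    def choose(self):
        # Sample a plausible success rate from each arm's posterior
        # and play the arm with the highest draw; uncertain arms get
        # explored naturally because their draws vary more.
        draws = [random.betavariate(w, l)
                 for w, l in zip(self.wins, self.losses)]
        return draws.index(max(draws))

    def update(self, arm, reward):
        # Bayesian update is just a count increment for Bernoulli rewards.
        if reward:
            self.wins[arm] += 1
        else:
            self.losses[arm] += 1
```

Because each update is a pair of integer increments, this runs fully online, which matches the "completely on-line" framing above.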
VISUG - Approaches for application request throttling, by Maarten Balliauw
Speaking from experience building a SaaS: users are insane. If you are lucky, they use your service, but in reality, they probably abuse it. Crazy usage patterns resulting in more requests than expected, request bursts when users come back to the office after the weekend, and more! These all pose a potential threat to the health of our web application and may impact other users or the service as a whole. Ideally, we can apply some filtering at the front door: limit the number of requests over a given timespan, limiting bandwidth, ...
In this talk, we’ll explore the simple yet complex realm of rate limiting. We’ll go over how to decide on which resources to limit, what the limits should be and where to enforce these limits – in our app, on the server, using a reverse proxy like Nginx or even an external service like CloudFlare or Azure API management. The takeaway? Know when and where to enforce rate limits so you can have both a happy application as well as happy customers.
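One classic primitive for such front-door limits is the token bucket (a close cousin of the leaky bucket used by proxies such as Nginx): it permits short bursts up to a capacity while enforcing a steady average rate. The sketch below uses assumed names and an injected clock; it is not from the talk.

```python
class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at
    `rate` tokens per second. `now` is passed in for testability."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity   # start full: an initial burst is allowed
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With `rate=1, capacity=2`, two back-to-back requests succeed, a third is rejected, and one more token becomes available after a second of quiet.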
OpenNebulaConf 2013 - Monitoring of OpenNebula installations, by Florian Heigl, OpenNebula Project
The complexity of a typical OpenNebula installation brings a special set of challenges on the monitoring side. In this talk, I will show monitoring of the full stack, from the physical servers to the storage layer and the ONE daemon. Providing an aggregated view of this information allows you to see the real impact of a given failure. I would also like to present a use case for a “closed-loop” setup where new VMs are automatically added to the monitoring without human intervention, allowing for an efficient approach to monitoring the services an OpenNebula setup provides.
Bio:
I’ve been into virtualization and storage for a long time, and I like the amount of abstraction OpenNebula offers. Professionally I have been a Unix systems administrator for most of my working life. I’ve also done systems integration and monitoring work on the Check_MK project. Now I’m one of the very few Nagios experts in Germany not working for one of the 3-5 leading Nagios outfits, and as such I’m able to speak freely about what I think works best for users. My strength is simply sitting down and listening to what people really need.
What are some of the performance implications of using lambdas, and what strategies can be used to address them? When might we want an alternative to using a lambda, and how can we design our APIs to be flexible in this regard? What are the principles of writing low latency code in Java? How do we tune and optimize our code for low latency? When don’t we optimize our code? Where does the JVM help, and where does it get in our way? How does this apply to lambdas? How can we design our APIs to use lambdas and minimize garbage?
Processing Terabytes of data every day … and sleeping at night (infiniteConf …), by Luciano Mammino
This is the story of how we built a highly available data pipeline that processes terabytes of network data every day, making it available to security researchers for security assessment and threat hunting.
Building this kind of stuff in the cloud is not that complicated, but if you have to make it near real-time, fault tolerant and 24/7 available, well... that's another story. In this talk, Luciano and Domagoj will tell you how they achieved this ambitious goal and how they missed a few good nights of sleep while trying to do that!
Spoiler alert: contains AWS, lambda, elastic search, monitoring, alerting & more!
Approaches for application request throttling - dotNetCologne, by Maarten Balliauw
Speaking from experience building a SaaS: users are insane. If you are lucky, they use your service, but in reality, they probably abuse it. Crazy usage patterns resulting in more requests than expected, request bursts when users come back to the office after the weekend, and more! These all pose a potential threat to the health of our web application and may impact other users or the service as a whole. Ideally, we can apply some filtering at the front door: limit the number of requests over a given timespan, limiting bandwidth, ...
In this talk, we’ll explore the simple yet complex realm of rate limiting. We’ll go over how to decide on which resources to limit, what the limits should be and where to enforce these limits – in our app, on the server, using a reverse proxy like Nginx or even an external service like CloudFlare or Azure API management. The takeaway? Know when and where to enforce rate limits so you can have both a happy application as well as happy customers.
Jeremy Edberg (MinOps) - How to build a solid infrastructure for a startup t…, by Startupfest
You're building your startup and you know it will be big. You don't want to spend a lot of time on infrastructure, but you also don't want to be putting out fires after you get mentioned on Hacker News. In this session, we will give you real practical tips that you can take home with you on building an infrastructure that will scale quickly with minimal up front work on your part, using time tested techniques in infrastructure as code, SaaS, and Serverless, among other things.
Similar to Rate Limiting at Scale, from SANS AppSec Las Vegas 2012
Fixing security by fixing software development, by Nick Galbreath
Fixing Security by Fixing Software Development Using Continuous Deployment
Do you have an effective release cycle? Is your process long and archaic? Long release cycles are typically based on assumptions we haven't seen since the 1980s and require very mature organizations to implement successfully. They can also disenfranchise developers from caring or even knowing about security or operational issues. Attend this session to learn more about an alternative approach to managing deployments through Continuous Deployment, otherwise known as Continuous Delivery. Find out how small but frequent changes to the production environment can transform an organization’s development process to truly integrate security. Learn how to get started with continuous deployment and what tools and processes are needed to make implementation within your organization a (security) success.
SQL-RISC: New Directions in SQLi Prevention - RSA USA 2013, by Nick Galbreath
What if we could reduce SQLi attacks in your application by 90%? With little to no changes in your application, with no new hardware or firewalls?
First presented at RSA Conference USA, 2013-02-27
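The talk's SQL-RISC approach isn't reproduced here; as general background, parameterized queries are the standard application-level defense, where user input is bound as data rather than spliced into SQL text. The sketch uses Python's stdlib sqlite3 driver; the `find_user` helper and schema are invented for illustration.

```python
import sqlite3

# In-memory database with a toy schema for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name):
    # The driver binds `name` as data, never as SQL text, so quotes
    # in the input cannot change the structure of the query.
    cur = conn.execute("SELECT role FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

`find_user("alice")` returns the admin row, while a classic injection string like `"' OR '1'='1"` matches nothing, because it is compared as a literal name rather than executed as SQL.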
Rebooting Software Development - OWASP AppSecUSA, by Nick Galbreath
If we are ever going to get ahead of the whack-a-mole security vulnerability game, we, as security professionals, need to get more involved in the development of software. Let's review the origins of traditional software development and what assumptions are made. Then we'll review whether those assumptions still hold for modern web applications, and what problems they cause, especially for security. Continuous deployment helps address these problems and allows for faster, more secure development. It's more than just "pushing code a lot"; when done correctly it can be transformative to the organization. We'll discuss what continuous deployment is, how to get started, and what components are needed to make it successful, and secure.
libinjection and sqli obfuscation, presented at OWASP NYC, by Nick Galbreath
SQL that isn't caught by WAFs but also isn't used (yet) by attackers! Why detecting SQLi is good, and why doing it with regular expressions is hard. Also re-introducing libinjection, which is a new way of detecting SQLi attacks.
This is a mashup of my Black Hat USA 2012 and DEFCON 20 talks, refreshed and updated.
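A toy illustration (not libinjection's actual logic, and a deliberately simplistic signature) of why regex-based detection is hard: trivial obfuscation with inline SQL comments preserves the query's meaning while slipping past the pattern.

```python
import re

# A naive signature a WAF might use: a quote, whitespace, then OR.
NAIVE_SQLI = re.compile(r"'\s+or\s+", re.IGNORECASE)

def looks_malicious(s):
    return bool(NAIVE_SQLI.search(s))

attack = "' OR '1'='1"
# Inline comments replace the whitespace; SQL parses it identically,
# but the \s+ in the signature no longer matches.
obfuscated = "'/**/OR/**/'1'='1"
```

The plain payload is flagged while the comment-obfuscated equivalent sails through, which is why libinjection tokenizes the input like a SQL lexer instead of pattern-matching its surface form.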
Continuous Deployment - The New #1 Security Feature, from BSidesLA 2012, by Nick Galbreath
First presented at Security BSidesLA, Hermosa Beach, California, August 16, 2012
Continuous deployment is characterized by small and frequent changes to production. Find out why it's my #1 security feature. It's not just about pushing fast!
How do fonts look when uploaded to SlideShare at various sizes? How do they look on a washed-out projector? For plain text? For computer code?
This presentation provides a number of sans-serif and monospace fonts to help answer these questions.
PHP Frameworks: I want to break free (IPC Berlin 2024), by Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
GraphRAG is All You Need? LLM & Knowledge Graph, by Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Smart TV Buyer Insights Survey 2024, by 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Essentials of Automations: Optimizing FME Workflows with Parameters, by Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
Rate Limiting at Scale, from SANS AppSec Las Vegas 2012
1. Rate-Limiting at Scale
SANS AppSec Las Vegas 2012
Nick Galbreath @ngalbreath nickg@etsy.com
2. Who is Etsy? Who is Nick?
• “Marketplace for Small Creative Businesses”
• Alexa says #51 for USA traffic
• > $500MM transaction volume last year
• Billions and billions of page views
• Nick Galbreath, Director of Engineering focusing on Security, Fraud, and other fun stuff
3. What’s a Rate Limit?
Maximum number of events per (brief) period per user, after which the resource is denied.
e.g. “no more than 2 logins per minute”
5. Robots Gone Wild
• Robots / crawlers (not always an intended DDoS)
• 20,000 items in a shopping cart
• Spam attacks!
• Can crush sites very quickly, at almost no cost, especially when a crawl generates load or writes to the database
6. Humans Are Resources Too
• Rate limits are needed for anything that gets reviewed by humans, such as customer service requests.
• CRMs are typically bad at dealing with spammy stuff.
7. Anything Involving Money
• Without rate limits on credit card authorizations, your site becomes a card-skimmer site.
• Using a website is much easier than going to the gas station pump or another anonymous card reader.
9. Do Rate Limits Stop All Fraud? No, but...
• They eliminate false positives and punks
• They allow you to focus on more sophisticated attacks
• They protect against damaging bursts of activity (malicious or not)
10. Rate limits are needed on anything that depends on an external resource.
This is almost everything!
16. Ouch!
• At scale, this is really painful for databases to handle.
• Constant binary-tree index churn
• Use an in-memory database (or run off a ramdisk) if trying this out
17. Quantized Rate Limits
• Store a count in a time window, or bucket.
• Map the current time to a bucket: (int)(NOW()/period), e.g. NOW()/3600 gives the hour bucket.
19. Direct Lookup
• Everything is a primary-key lookup: userid-event-period-bucketid
60min: “nickg-login-3600-5589007547”
10min: “nickg-login-600-33534045284”
• Multiple time frames require multiple buckets, which means multiple inserts and checks.
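The lookup-key scheme can be sketched in a few lines of Python; the key format follows the slide, while the helper name and fixed timestamps are my own illustration:

```python
import time

def bucket_key(user_id, event, period, now=None):
    """Build a primary-key-style lookup key for a quantized rate-limit bucket.

    The current time is integer-divided by the period length (in seconds),
    so every event in the same window maps to the same bucket id.
    """
    now = int(time.time()) if now is None else int(now)
    bucket_id = now // period
    return f"{user_id}-{event}-{period}-{bucket_id}"

# A fixed timestamp makes the bucket ids deterministic for the example.
print(bucket_key("nickg", "login", 3600, now=1_700_000_000))  # hourly bucket
print(bucket_key("nickg", "login", 600, now=1_700_000_000))   # 10-minute bucket
```

Checking the same event against a 60-minute and a 10-minute limit means computing both keys and incrementing both buckets, which is the multiple-insert cost the slide mentions.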
20. Quantized RL Accuracy
Not exact. If you set a limit of N per period, quantized rate limits may let through as many as (N-1)×2 per period.
e.g. 10 per minute --> 18 per minute
Yikes. Maths!
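A toy simulation shows where the (N-1)×2 figure comes from: a burst can straddle a bucket boundary, with N-1 events at the end of one bucket and N-1 at the start of the next, none of them tripping either bucket's counter. The model below is mine, not code from the talk:

```python
def allowed(counts, bucket, limit):
    # Toy check: bump the bucket's counter and deny once it exceeds the limit.
    counts[bucket] = counts.get(bucket, 0) + 1
    return counts[bucket] <= limit

period, limit = 60, 10
counts = {}
passed = 0
# 9 events just before the bucket boundary at t=60, 9 just after:
# every one passes, yet all 18 land inside one 60-second wall-clock window.
for t in list(range(51, 60)) + list(range(60, 69)):
    if allowed(counts, t // period, limit):
        passed += 1
print(passed)  # 18 events slip through a "10 per minute" limit
```

In exchange for this bounded inaccuracy, every check is a single key lookup, which is the trade the next slide makes explicit.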
22. Rate-Limits at Scale
• We traded exact accuracy and flexibility for scaling.
• Implementation using Memcache or Redis (and perhaps SQL):
set nickg-login-60-212331231 += 1
• Well-known sharding techniques
• Auto-expiration of old buckets
• Each set/get takes 1/10 of a millisecond or less. Almost invisible.
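The increment-and-check above can be sketched end to end. An in-memory stand-in replaces the store so the example runs anywhere; a real deployment would point the same logic at a Redis or Memcache client. The key format follows the slides; the function and class names are my own:

```python
import time

class FakeStore:
    """In-memory stand-in for Memcache/Redis, so this sketch runs without
    a server. incr creates missing keys at 1, mirroring Redis INCR."""
    def __init__(self):
        self.data = {}
    def incr(self, key):
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]
    def expire(self, key, seconds):
        pass  # a real store would auto-expire old buckets here

def rate_limit_exceeded(store, user_id, event, period, max_events, now=None):
    """Bump the current bucket's counter and report whether the limit blew."""
    now = int(time.time()) if now is None else int(now)
    key = f"{user_id}-{event}-{period}-{now // period}"
    count = store.incr(key)
    store.expire(key, period * 2)  # old buckets clean themselves up
    return count > max_events

store = FakeStore()
# "No more than 5 sign-ins per minute": the 6th call in a window trips it.
results = [rate_limit_exceeded(store, "nickg", "signin", 60, 5, now=100)
           for _ in range(6)]
print(results)  # five False, then True
```

Because each check is one counter increment on a single key, the keys shard across a Memcache/Redis fleet with the usual consistent-hashing techniques.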
23. Memory
• Say 256 bytes per bucket
• 10,000,000 buckets is a lot of buckets
• But that’s only ~2.5 GB, and fixed
• This is easy on one machine.
25. Please write unit tests!
• Easy to get wrong, and the consequences can be unpleasant
• Edge cases and race conditions
• Memcache doesn’t have an “insert or increment” operation. You need to do multiple steps and check error conditions.
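The memcache multi-step dance is worth spelling out, since the race lives between the steps. A minimal sketch with an in-memory stand-in (class, function, and key names are mine; real memcache semantics are only approximated):

```python
class FakeMemcache:
    """In-memory stand-in with memcache-like semantics: incr fails
    (returns None) on a missing key, add fails on an existing one."""
    def __init__(self):
        self.data = {}
    def incr(self, key):
        if key not in self.data:
            return None
        self.data[key] += 1
        return self.data[key]
    def add(self, key, value, expire=0):
        if key in self.data:
            return False  # another client created the key first
        self.data[key] = value
        return True

def bump(mc, key, expire):
    """Insert-or-increment in multiple steps, checking each error condition.

    The race: between our failed incr and our add, another process can
    create the key, so a failed add must fall back to incr again.
    """
    count = mc.incr(key)
    if count is not None:
        return count
    if mc.add(key, 1, expire=expire):
        return 1
    return mc.incr(key) or 1  # lost the race: key exists now, increment it

mc = FakeMemcache()
print(bump(mc, "nickg-login-60-1", 120))  # first event creates the bucket
print(bump(mc, "nickg-login-60-1", 120))  # later events increment it
```

Unit tests should cover each branch: key absent, key present, and the lost-race path where add fails.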
26. Please make an API
• Make it simple for anyone to add rate limiting to their code.
• Make it one line:
// event, period, max events
if (rate_limit_exceed("signin", 60, 5)) {
    // do something
}
27. Rollout
• Once in production, start with guesstimates for rate limits
• If a rate limit is triggered, take no action; only log and graph
• Does volume match expectations?
• Wash, rinse, repeat until tuned appropriately
28. Oh yeah, don’t forget: put your rate-limit datastore behind the firewall.
29. So a user hit a rate limit. Now what?
A dialog with product, customer service, and engineering:
• Do you let them know? (visible indicator)
• Do you start CAPTCHA-ing?
• Do you black-hole it? (silent)
Also keep logging and graphing. You’ll need these to debug when things go awry.
31. I feel bad if I don’t use a graph in a presentation
(Graph: CAPTCHA rate vs. Etsy API traffic)
32. How we do it
• We use Graphite for real-time graphing: http://graphite.wikidot.com/
• We use StatsD as our API: http://etsy.me/dQwVXi and https://github.com/etsy/statsd
• Our apps do this: StatsD::increment('signins');
• UDP-based -- can’t break the application
33. Division Built In!
Combine, mix, and match data in Graphite to discover new insights.
Seasonal data is hard to alert on, but the ratio between two seasonal series is nearly constant and easy to alert on.
Who knew that 1 in 5 logins being failures is universal?!
p.s. Holt-Winters exponential smoothing is also built in
35. Laddering
• Use laddering to apply rate limits at different time scales for the same event.
• Set a short period with a high rate to prevent bursts
• Then set a longer period with a lower rate to prevent slow-crawling robots
36. Ladder longer periods to have a smaller rate
Negative example:
2 per minute (~0.033 events per sec) is 2×60 = 120 per hour,
so laddering with 300 per hour (~0.083 events per sec) does nothing,
but 100 per hour (~0.028 events per sec) is good.
Oh no! The maths again!
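The arithmetic boils down to comparing events-per-second rates across rungs; a tiny helper (the name is mine) makes the check explicit:

```python
def ladder_is_useful(short_limit, short_period, long_limit, long_period):
    """A longer rung only adds protection if its per-second rate is lower
    than the shorter rung's; otherwise the short limit already implies it.
    Periods are in seconds."""
    return (long_limit / long_period) < (short_limit / short_period)

# 2 per minute already caps an hour at 120, so 300/hour adds nothing...
print(ladder_is_useful(2, 60, 300, 3600))  # False
# ...but 100/hour is tighter than the 120/hour the short rung allows.
print(ladder_is_useful(2, 60, 100, 3600))  # True
```

Running such a check when a new ladder rung is configured would catch the useless-rung mistake before it ships.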
37. In Pictures...
A rate limit of “3 per 1 box”: ok.
A rate limit of “5 per 3 boxes”: alert! (good)
But a rate limit of, say, “100 per 3 boxes” does nothing and is impossible to trigger.
39. Anonymous Users
• Hash of (IP + appropriate HTTP headers)
• Order of headers matters: different browsers order them differently
• Spoofed user agents don’t always get the order right
(A different type of anonymous user)
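An order-sensitive fingerprint can be sketched as follows; which headers to include is a judgment call, and the selection here is purely illustrative:

```python
import hashlib

def anon_user_key(ip, headers):
    """Fingerprint an anonymous user from IP plus selected HTTP headers.

    Header ORDER is preserved on purpose: browsers emit headers in a
    characteristic order, and spoofed user agents often get it wrong,
    so order changes alone produce a different fingerprint.
    """
    material = ip + "|" + "|".join(f"{name}:{value}" for name, value in headers)
    return hashlib.sha1(material.encode()).hexdigest()

real = [("User-Agent", "Mozilla/5.0"), ("Accept", "*/*"), ("Accept-Language", "en")]
spoofed = [("Accept", "*/*"), ("User-Agent", "Mozilla/5.0"), ("Accept-Language", "en")]
# Same header values, different order: different anonymous "users".
print(anon_user_key("203.0.113.7", real) != anon_user_key("203.0.113.7", spoofed))  # True
```

The resulting hash slots directly into the userid position of the bucket keys, so anonymous traffic gets the same quantized limits as logged-in users.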
40. Rate Limit Every IP?
• Probably just per Class C (only 16M of them)
• Maybe useful just for alerting
• Probably need whitelisting (e.g. AOL)
41. Rate Limit Datacenters
http://github.com/client9/ipcat
Datacenter / rent-a-slice / “hands not on keyboard” / leaseable CPU and network.
How much traffic is coming from them?
43. • Almost every action on Etsy has a laddered rate limit
• We learn the hard way what is not limited
• Virtually no performance impact at scale
• Should we open source the driver?