This document describes the case of a 74-year-old woman who presented to the emergency department with acute dyspnea and respiratory distress. Initial workup revealed signs of congestion on chest x-ray and elevated troponin levels. Her condition deteriorated rapidly, and echocardiography showed widespread hypokinesis consistent with Takotsubo cardiomyopathy. She was transferred urgently to a primary PCI center, where coronary angiography showed no obstructive coronary disease, supporting the diagnosis. She was started on inotropic support and recovered without further intervention. The key lessons are to promptly perform echocardiography and ECG in patients with cardiogenic shock, to provide inotropic support and vasopressors as needed, and to rapidly transfer patients to centers with PCI capability.
Cupping therapy is an ancient healing method in which cups are placed on the skin and suction is used to draw the skin and deeper tissues up into the cups. Proponents claim that this increases circulation, draws impurities to the surface of the skin, reduces inflammation and toxins, stimulates blood and lymph flow, and strengthens the immune system. Cupping has been used for thousands of years in Egypt, China, and the Middle East to treat conditions including pain, respiratory problems, and skin disorders. It is considered generally safe when performed properly by a trained practitioner.
This document discusses various protocols for anticoagulation during hemodialysis. It begins by noting that patients on hemodialysis are at risk of both bleeding and thrombosis. It then outlines several anticoagulation protocols, including unfractionated heparin (UFH) administered as a constant infusion or intermittent boluses, and low-molecular-weight heparin (LMWH). LMWH has benefits over UFH, such as a longer half-life and a more predictable anticoagulant effect, but it is also more expensive. The document also discusses heparin-free dialysis, regional citrate anticoagulation, and other alternatives to standard heparin protocols. Selection of the optimal anticoagulation method requires consideration of individual patient factors.
The document discusses the history and development of hemodialysis adequacy measures. It describes how Frank Gotch and John Sargent developed the Kt/V measure in the 1970s to more accurately assess dialysis dose based on urea clearance. This resolved issues with prior methods that used target BUN levels. The document outlines the benefits of Kt/V over BUN and notes minimum recommended levels of Kt/V and URR to ensure adequate dialysis.
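The summary above mentions Kt/V and URR without showing how they are computed. As a hedged sketch (the function names and example values below are illustrative assumptions, not taken from the document), the two adequacy measures work like this:

```python
# Illustrative single-pool Kt/V and URR calculations.
# Function names and example values are invented for illustration.

def kt_v(clearance_ml_min, session_min, urea_volume_l):
    """Kt/V: dialyzer urea clearance K (mL/min) times session time t (min),
    divided by the urea distribution volume V (converted to mL)."""
    return (clearance_ml_min * session_min) / (urea_volume_l * 1000.0)

def urr(pre_bun, post_bun):
    """Urea reduction ratio, as a percentage of the pre-dialysis BUN."""
    return 100.0 * (pre_bun - post_bun) / pre_bun

# Example: K = 250 mL/min, t = 240 min (4 h), V = 40 L
print(kt_v(250, 240, 40))   # 1.5
# Example: BUN falls from 80 to 25 mg/dL across a session
print(urr(80, 25))          # 68.75
```

Commonly cited minimums are around a single-pool Kt/V of 1.2 and a URR of 65%, though the document's own recommended levels are authoritative for its context.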
This document discusses sustained low-efficiency daily dialysis (SLEDD) for treating acute kidney injury (AKI) in critically ill patients. SLEDD is a hybrid therapy that combines aspects of continuous renal replacement therapy and intermittent hemodialysis. It allows for a reduced ultrafiltration rate and prolonged treatment duration to maximize dialysis dose while maintaining hemodynamic stability. The document outlines the indications for SLEDD, including patients at risk of disequilibrium or with borderline cardiovascular stability. Preliminary studies suggest SLEDD is a safe and effective option for AKI patients otherwise unsuitable for standard therapies.
The document discusses various guidelines and opinions on when to initiate dialysis for patients with chronic kidney disease. It notes that residual kidney function and signs of malnutrition or uremia are often used as criteria for determining when to start dialysis. However, the optimal timing remains controversial as there is no strong evidence from randomized controlled trials. Earlier initiation of dialysis could help prevent complications but also imposes additional burdens.
This document discusses dry weight, the ideal post-dialysis weight at which a patient maintains normal blood pressure without antihypertensive medication until the next dialysis session. It explains that extracellular volume overload is a main cause of hypertension in dialysis patients. Achieving the correct dry weight through clinical assessment and trial and error allows blood pressure to be controlled in most patients. Dry weight can be difficult to determine accurately and must be adjusted regularly as patient factors such as appetite and nutrition change over time.
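The dry-weight concept above implies a simple fluid-removal calculation performed each session. A hedged sketch of that arithmetic (all names and numbers are illustrative assumptions, not from the document):

```python
# Sketch of the fluid-removal arithmetic behind dry weight.
# All names and numbers here are illustrative, not from the document.

def ultrafiltration_goal_l(pre_dialysis_weight_kg, dry_weight_kg):
    """Fluid to remove this session; 1 kg of weight gain ~ 1 L of fluid."""
    return pre_dialysis_weight_kg - dry_weight_kg

def uf_rate_ml_per_kg_per_h(uf_goal_l, dry_weight_kg, session_h):
    """Ultrafiltration rate normalized to body weight and session length."""
    return (uf_goal_l * 1000.0) / (dry_weight_kg * session_h)

# Example: patient arrives at 72.5 kg with an estimated dry weight of 70 kg
goal = ultrafiltration_goal_l(72.5, 70.0)          # 2.5 L to remove
rate = uf_rate_ml_per_kg_per_h(goal, 70.0, 4.0)    # ~8.9 mL/kg/h over 4 h
print(goal, round(rate, 1))
```

Because the dry-weight estimate drives the ultrafiltration goal directly, an inaccurate or stale dry weight translates straight into under- or over-removal of fluid, which is why the document stresses regular reassessment.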
1. The document discusses acute renal failure in ICU patients, including epidemiology, pathophysiology, and treatment options like continuous renal replacement therapy (CRRT).
2. It presents two case studies of patients with acute renal failure and discusses initiating CRRT for them based on their clinical status and indications.
3. Key aspects of CRRT are reviewed, including modes of treatment, dosing, anticoagulation options like citrate, and the process for starting patients on CRRT at the hospital.
The document discusses cupping therapy (al-hijama), summarizing it as an ancient practice, found in Chinese and Middle Eastern traditional medicine, that uses suction cups to treat various conditions by drawing blood to the skin's surface. It notes that the Prophet Muhammad practiced cupping therapy, and provides details on how cupping is said to work, recommended times and locations on the body for cupping, different cupping techniques (wet and dry), claimed benefits for various health issues, and possible minor side effects such as temporary skin markings.
The document discusses the implementation of an occupational health and safety (OHS) system at a company. Various stakeholders debate the costs and benefits, with the CEO, manager, accountant, and workers expressing different views. The CEO argues it is necessary for regulatory compliance and staff safety. An option called OHSNETbase is presented as an online system that could streamline hazard reporting and management.
OHSNETbase is a rapidly deployed OHS management system, meaning minimal disruption to staff and production.
OHSNETbase is supported 24/7 so your system is available when you need it.
OHSNETbase is a hosted "solution in the can" so you do not need an IT system. Everything you need is at your fingertips.
Expecto Performa! The Magic and Reality of Performance Tuning - Atlassian
In the enterprise there are rarely simple solutions to highly nuanced problems that satisfy all needs. Several customers might each ask "How do I make Jira/Confluence faster?" and each require a different answer. Using this example, this talk will pick apart the inputs, outputs, concerns, and realities of answering a short question with a long answer. We'll then discuss real-world examples from our own internal instances, to give you a taste of the process we've gone through to solve our own performance problems, and to show why there is no simple playbook; "it depends" on a lot! The key takeaways are:
* The importance of having a shared definition of performance
* The importance of having agreed-upon priorities, including what isn't important
* The importance of measuring (allthethings) and understanding them
* The thing you think is the problem might not be the problem, and vice versa.
* The real world and the ideal world tend to look nothing alike!
The document discusses data-oriented design principles for game engine development in C++. It emphasizes understanding how data is represented and used to solve problems, rather than focusing on writing code. It provides examples of how restructuring code to better utilize data locality and cache lines can significantly improve performance by reducing cache misses. Booleans packed into structures are identified as having extremely low information density, wasting cache space.
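A minimal sketch of the low-information-density point above: packing boolean flags into the bits of a single integer instead of storing each as its own byte-sized (or larger) field. The flag names are invented for illustration; the talk's own examples are in C++.

```python
# Bit-packing boolean flags: eight flags fit in one byte instead of eight
# byte-sized booleans, so a cache line carries 8x as much flag information.
# Flag names are invented for illustration.

IS_ALIVE, IS_VISIBLE, IS_STATIC = 0, 1, 2   # bit positions, 1 bit per flag

def set_flag(flags: int, bit: int) -> int:
    return flags | (1 << bit)

def clear_flag(flags: int, bit: int) -> int:
    return flags & ~(1 << bit)

def has_flag(flags: int, bit: int) -> bool:
    return bool(flags & (1 << bit))

f = set_flag(set_flag(0, IS_ALIVE), IS_STATIC)
print(has_flag(f, IS_ALIVE), has_flag(f, IS_VISIBLE))   # True False
```

The same idea in C++ would use bitfields or a `uint8_t` of flags; the cache-line benefit comes from touching fewer bytes per entity, not from the language.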
To hit Ruby3x3, we must first figure out **what** we're going to measure and **how** we're going to measure it, in order to get what we actually want. I'll cover some standard definitions of benchmarking in dynamic languages, as well as the tradeoffs that must be made when benchmarking. I'll look at some of the possible benchmarks that could be considered for Ruby 3x3 and evaluate what each is good at measuring and less good at measuring, to help the Ruby community decide what the 3x goal will be measured against.
Moved to https://slidr.io/azzazzel/web-application-performance-tuning-beyond-xmx - Milen Dyankov
This slide deck will be removed from here in the future. It has been moved to: https://slidr.io/azzazzel/web-application-performance-tuning-beyond-xmx
The complexity of a typical OpenNebula installation brings a special set of challenges on the monitoring side. In this talk, I will show monitoring of the full stack, from the physical servers to the storage layer and the ONE daemon. Providing an aggregated view of this information allows you to see the real impact of a given failure. I would also like to present a use case for a “closed-loop” setup where new VMs are automatically added to the monitoring without human intervention, allowing for an efficient approach to monitoring the services an OpenNebula setup provides.
OpenNebulaConf 2013 - Monitoring of OpenNebula installations by Florian Heigl - OpenNebula Project
Bio:
I’ve been into virtualization and storage for a long time, and I like the amount of abstraction OpenNebula offers. Professionally, I have been a Unix systems administrator for most of my working life. I’ve also done systems integration and monitoring work on the Check_MK project. Now I’m one of the very few Nagios experts in Germany not working for one of the 3-5 leading Nagios outfits, and as such I’m able to speak freely about what I think works best for users. My strength is simply sitting down and listening to what people really need.
This document discusses practical code and data design topics such as cache optimization, generics, out of memory handling, pool allocators, sorting large data, and lock flags. It provides links to the author's social media and YouTube channel for additional information. The author aims to provide factual information and optimize for simple APIs, readable code, and data-oriented design.
Monitoring Big Data Systems Done "The Simple Way" - Demi Ben-Ari - Codemotion
Once you start working with Big Data systems, you discover a whole bunch of problems you won’t find in monolithic systems. Monitoring all of the components becomes a big data problem in itself. In the talk, we’ll cover the aspects you should take into consideration when monitoring a distributed system built with tools like web services, Spark, Cassandra, MongoDB, and AWS. Beyond the tools, what should you monitor about the actual data that flows through the system? We’ll cover the simplest solution using your day-to-day open source tools; the surprising thing is that it comes not from an Ops guy.
THE RISE AND FALL OF SERVERLESS COSTS - TAMING THE (SERVERLESS) BEAST - Opher Dubrovsky
From a talk given on June 10, 2020 at the DevTalks Reimagined conference.
ABSTRACT:
Serverless is an amazing beast of a technology. With it, you can quickly build and deploy incredible systems. You get out-of-the-box scalability and flexibility. Nevertheless, with great power comes great(er) cost! In this talk, you’ll learn about building a huge data pipeline using serverless architecture, and how to tame the beast. After this session, you will understand the pitfalls to avoid and the great powers to exploit.
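As a rough illustration of why serverless costs can grow into a "beast", here is a hedged back-of-the-envelope cost model. The prices are assumptions based on typical published AWS Lambda rates, not figures from the talk; check current pricing before relying on them.

```python
# Hedged back-of-the-envelope serverless cost model.
# Prices are assumptions based on typical published AWS Lambda rates.

PRICE_PER_MILLION_REQUESTS = 0.20      # USD, assumed
PRICE_PER_GB_SECOND = 0.0000166667     # USD, assumed

def monthly_lambda_cost(invocations, avg_duration_ms, memory_mb):
    """Estimated monthly cost: per-request charge plus compute (GB-seconds)."""
    request_cost = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# Example: 100M invocations/month, 200 ms average duration, 512 MB memory
print(round(monthly_lambda_cost(100_000_000, 200, 512), 2))
```

The compute portion scales linearly in invocations, duration, and memory, which is why trimming any one of those factors (the usual taming tactics) directly cuts the bill.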
We can leverage Delta Lake and Structured Streaming for write-heavy use cases. This talk will walk through a use case at Intuit where we built a merge-on-read (MOR) architecture to meet a very low-latency SLA. With MOR there are different ways to view the fresh data, so we will also go over the methods we used to performance-test those options and arrive at the best one for this use case.
During the continuous mORMot refactoring, core parts of the framework were rewritten. In this session, we propose a journey through the refactoring of a single loop. It will take us from a naïve but working approach to a 10-times-faster Pascal rewrite, and then introduce how SSE2 and AVX2 assembly can boost the process even further, reaching more than a 30-times improvement! No previous knowledge of assembly is needed: we will introduce how modern CPUs work and have some fun with algorithms and SIMD parallelism.
Scaling Crittercism to 30,000 Requests Per Second and Beyond with MongoDB - MongoDB
The document discusses MongoDB and how Crittercism scaled their database to handle over 30,000 requests per second. It begins with an overview of Crittercism's background and architecture, including how they started as a dating app and transitioned to a mobile analytics platform. It then covers their MongoDB router architecture, including how they evolved from a single mongos router to using a separate mongos tier. This helped reduce connections to config servers and propagation delays. It also discusses considerations for sharding and balancing performance as their database usage grew significantly.
Systems Monitoring with Prometheus (Devops Ireland April 2015) - Brian Brazil
Monitoring means many things to many people. This talk looks at systems monitoring: how to keep an eye on a given system and use that view as part of its overall management. It covers why one monitors, what to monitor, how to monitor, the general design of a monitoring system, and how Prometheus is a good fit in terms of instrumentation, consoles, alerts, and general system health and sanity.
Prometheus is a next-generation monitoring system publicly announced earlier this year, developed by companies including SoundCloud, locals Boxever, and Docker. Since launch there has been widespread interest and many community contributions.
For more information see http://prometheus.io or http://www.boxever.com/tag/monitoring
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/2lGNybu.
Stefan Krawczyk discusses how his team at StitchFix uses the cloud to enable over 80 data scientists to be productive. He also talks about prototyping ideas, algorithms, and analyses; how they set up and keep schemas in sync between Hive, Presto, Redshift, and Spark; and how they make access easy for their data scientists. Filmed at qconsf.com.
Stefan Krawczyk is Algo Dev Platform Lead at StitchFix, where he’s leading development of the algorithm development platform. He spent formative years at Stanford, LinkedIn, Nextdoor & Idibon, working on everything from growth engineering, product engineering, data engineering, to recommendation systems, NLP, data science and business intelligence.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI - Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
8. ● Pick something and go with it
● Make mistakes along the way
● Correct the mistakes you can
● Work around the ones you can’t
How a Startup Gets Started
25. Router Architecture
Single mongos per client problems we encountered:
26. Router Architecture
Single mongos per client problems we encountered:
● thousands of connections to config servers
● config server CPU load
● configdb propagation delays
29. Router Architecture
Separate mongos tier advantages:
● greatly reduced number of connections to each mongod
● far fewer hosts talking to the config servers
● much faster configdb propagation
30. Router Architecture
Separate mongos tier advantages:
● greatly reduced number of connections to each mongod
● far fewer hosts talking to the config servers
● much faster configdb propagation
Disadvantages:
31. Router Architecture
Separate mongos tier advantages:
● greatly reduced number of connections to each mongod
● far fewer hosts talking to the config servers
● much faster configdb propagation
Disadvantages:
● additional network hop
● more points of failure
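The connection-count advantage of a dedicated mongos tier comes down to simple fan-in arithmetic. A minimal sketch, with host counts and pool sizes invented for illustration (none of these numbers come from the talk):

```javascript
// Each mongos holds a connection pool to every mongod. With one mongos per
// client host, every mongod sees a pool from every host; with a dedicated
// tier, only the tier hosts connect to each mongod.
function connectionsPerMongod(mongosCount, poolSize) {
  return mongosCount * poolSize;
}

const clientHosts = 500; // assumed app-server count
const mongosTier = 12;   // assumed dedicated router count
const poolSize = 20;     // assumed connections per mongos pool

const perClientModel = connectionsPerMongod(clientHosts, poolSize);
const tierModel = connectionsPerMongod(mongosTier, poolSize);

console.log(perClientModel); // 10000 connections per mongod
console.log(tierModel);      // 240 connections per mongod
```

The same fan-in logic applies to the config servers, which is why the tier also means "far fewer hosts talking to the config servers."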
50. The Balancer and Me
Why wouldn’t you run the balancer in the first place?
● great question
● for us, it’s because we deleted a ton of data at one point, and left a
bunch of holes
○ we turned it off while deleting this data
○ and then were unable to turn it back on
● but maybe you start without it
● or maybe you need to turn it off for maintenance and forget to turn
it back on
Obviously, don’t do this. But if you do, here’s what happens...
51. The Balancer and Me
Fresh, new, empty cluster… But no balancer running.
69. So what can we do?
1. add IOPS
The Balancer and Me
70. So what can we do?
1. add IOPS
2. make sure your config servers have plenty of CPU (and IOPS)
The Balancer and Me
71. So what can we do?
1. add IOPS
2. make sure your config servers have plenty of CPU (and IOPS)
3. slowly move chunks manually
The Balancer and Me
72. So what can we do?
1. add IOPS
2. make sure your config servers have plenty of CPU (and IOPS)
3. slowly move chunks manually
4. approach a balanced state
The Balancer and Me
73. So what can we do?
1. add IOPS
2. make sure your config servers have plenty of CPU (and IOPS)
3. slowly move chunks manually
4. approach a balanced state
5. hold your breath
The Balancer and Me
74. So what can we do?
1. add IOPS
2. make sure your config servers have plenty of CPU (and IOPS)
3. slowly move chunks manually
4. approach a balanced state
5. hold your breath
6. try re-enabling the balancer
The Balancer and Me
76. How to manually balance:
1. determine a chunk on a hot shard
2. monitor effects on both the source and target shards
3. move the chunk
4. allow the system to settle
5. repeat
The Balancer and Me
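A toy version of the five-step loop above, in plain JavaScript rather than real mongos commands (the chunk counts are invented), shows how repeated single-chunk moves gradually approach a balanced state:

```javascript
// Repeatedly move one chunk from the fullest shard to the emptiest one,
// settling between moves, until the spread is within a single chunk.
function rebalance(chunkCounts) {
  const shards = [...chunkCounts];
  let moves = 0;
  while (Math.max(...shards) - Math.min(...shards) > 1) {
    const hot = shards.indexOf(Math.max(...shards));  // 1. find a hot shard
    const cold = shards.indexOf(Math.min(...shards)); // pick the target
    shards[hot] -= 1;                                 // 3. move one chunk
    shards[cold] += 1;
    moves += 1;                                       // 4./5. settle, repeat
  }
  return { shards, moves };
}

// four nearly full shards plus one almost-empty newcomer, as in the
// no-balancer story earlier
const result = rebalance([100, 100, 100, 90, 10]);
console.log(result); // { shards: [ 80, 80, 80, 80, 80 ], moves: 70 }
```

Note the move count: 70 single-chunk moves for a small toy cluster, each one causing I/O on the source and target plus a configdb update, which is why the real process is tedious and slow.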
77. How to manually balance:
1. determine a chunk on a hot shard
mongos> db.chunks.find({"shard":"<shard_name>",
"ns":"<db_name>.<collection>"}).limit(1).pretty()
You’ll get a single chunk document (with its min and max bounds); note its
shard key and ObjectId.
The Balancer and Me
78. How to manually balance:
1. determine a chunk on a hot shard
"min" : {
    "unsymbolized_hash" : "1572663b72e87[...]",
    "_id" : ObjectId("50b97db98238[...]")
},
The Balancer and Me
79. How to manually balance:
1. determine a chunk on a hot shard
2. monitor effects on both the source and target shards
iostat -xhm 1
mongostat
The Balancer and Me
80. How to manually balance:
1. determine a chunk on a hot shard
2. monitor effects on both the source and target shards
3. move the chunk
mongos> sh.moveChunk("<db_name>.<collection>", {
"unsymbolized_hash" : "1572663b72e87[...]",
"_id" : ObjectId("50b97db98238[...]") },
"<target_shard>")
The Balancer and Me
81. How to manually balance:
1. determine a chunk on a hot shard
2. monitor effects on both the source and target shards
3. move the chunk
4. allow the system to settle
5. repeat
The Balancer and Me
83. ● Design ahead of time
o “NoSQL” lets you play it by ear
o but some of these decisions will bite you later
● Be willing to correct past mistakes
o dedicate time and resources to adapting
o learn how to live with the mistakes you can’t correct
Summary
84. References
● MongoDB Blog post: http://blog.mongodb.org/post/77278906988/crittercism-scaling-to-billions-of-requests-per-day-on
● MongoDB Documentation on mongos routers: http://docs.mongodb.org/master/core/sharded-cluster-query-routing/
● MongoDB Documentation on the balancer: http://docs.mongodb.org/manual/tutorial/manage-sharded-cluster-balancer/
● MongoDB Documentation on shard keys: http://docs.mongodb.org/manual/core/sharding-shard-key/
Crittercism: http://www.crittercism.com/
I’m going to tell you the story of how we’ve scaled to handle over 30k req/s using a storage strategy based on MongoDB
Between proposing this talk and now, we’ve actually grown some more, and now top 40-45k r/s on a daily basis
This is about 3.5B requests per day
this is a preview of a talk I’ll be giving at MongoDB World, June 23-25 in NYC
you can still register
and of course Crittercism will be there
some advice from our experience about things to do and things not to do
I’ll be sure to leave time for Q&A
I’ll tell you how Crittercism got started, some of the lessons we’ve learned along the way, and some advice we can share based on those experiences
September 2010 (from Wayback Machine)
Started as a “feedback widget”
Enable mobile app developers to allow their users to provide “criticism” of their apps (outside of the app store)
Not just a star rating
this is pretty easy -
set up a (mongo) db, put an api in front of it, collect user feedback from our SDK
added more types of data we collect
volume starts getting large, so let’s count app loads in a memory-based data store (redis), and persist it to mongo
then we added user metadata as well, but that’s a different kind of data and a different volume and access pattern, so let’s add dynamodb into the mix
our volume keeps going up, so let’s cache this app data to make our responses faster
then we added APM, which introduced a lot of different data types and structures
so we added another ingest API and postgres into the mix
(but obviously we’re not going to talk about that part here…)
today (2014) - what it’s evolved into
collecting tons of detailed analytics data - crash reports, groupings
Geo data launched in 2013 (just kidding, this is stored in postgres)
iPad app launched in 2014 - more aggregations of performance data (more ways to view it)
lots to deal with...
so we started as a way for people to “criticize” your apps
then we helped you catch bugs, so we’re the ones doing the “criticism”
so how do we handle 40k/s on mongodb?
we don’t, but that’s our ingest rate, and most of it ends up in mongodb
the takeaway here is to be willing to use whatever works
2-year period
went from 700/s (60M/day)
to 40-45k/s (3.8B/day)
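Those figures check out with a quick seconds-per-day conversion:

```javascript
// requests/second to requests/day: 60 * 60 * 24 = 86400 seconds per day
const perDay = (reqPerSec) => reqPerSec * 60 * 60 * 24;

console.log(perDay(700));   // 60480000   (~60M/day)
console.log(perDay(45000)); // 3888000000 (~3.9B/day, top of the 40-45k range)
```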
one of the biggest things we did to help ourselves scale was to consolidate the mongos routers
default, first-pass architecture (for a sharded cluster): one mongos per client machine
each client process connects to a local mongos router
each mongos routes queries and returns results
could mean your application is reading stale data, or can’t find the data it needs when it needs it (and maybe it has to retry, which means it’s now slower)
move the mongos routers to their own tier
be smart about how you route to them
(we use chef to keep it within the same AZ)
be aware that this does introduce some disadvantages, too
This is a fundamental design decision that will have huge implications for a long time, so think about it carefully.
Hard (impossible) to change after the fact!
Say you have 4 shards. Let’s say each of the NHL teams that made the playoffs this year has an app, and we shard by app_id.
Let’s distribute them evenly, as is likely to be the case (assuming a sufficiently randomly-generated app_id)
this looks nice and even, right?
So now it’s time for the Western Conference Finals, and the Blackhawks are playing the Kings
So those 2 apps are going to get heavy use, but they’re on the same shard, so uh-oh...
Now this shard isn’t happy
Higher load, slower response time for queries to this shard (which are your most common queries due to these apps’ popularity)
so let’s add another shard
That might help if we have more teams’ apps to add
Those new apps had somewhere to go, to keep our cluster balanced
But this hasn’t helped our uneven access pattern at all
Only option now is to vertically scale the problem shard
and hopefully that cools it off, but now we have an uneven cluster to manage.
and what happens next year, when it’s two different teams in the conference finals?
maybe we get lucky and they’re on different shards… but even then, maybe the access is uneven enough that those 2 shards still get hot.
so maybe you just live with this and have heterogeneous shard servers. (this is probably a much lesser evil than trying to re-shard.)
lesson: you’re going to have to live with the shard key you choose, so choose wisely!
another option might’ve been to spread data for each app_id across all shards--but then your queries will likely be slower (due to having to read from many/all shards).
it’s a trade-off.
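The NHL example can be sketched as a toy model: data is spread evenly across shards, but skewed traffic toward two apps that happen to share a shard makes that shard hot. The placements, app names, and request counts below are all illustrative, not real data.

```javascript
// Even data placement: six apps over four shards, but the two playoff
// finalists landed on the same shard.
const shardOf = {
  blackhawks: 0, kings: 0, // uh-oh: same shard
  rangers: 1, canadiens: 1,
  bruins: 2,
  wild: 3,
};

function loadPerShard(requests) {
  const load = [0, 0, 0, 0];
  for (const [app, count] of Object.entries(requests)) {
    load[shardOf[app]] += count; // every query for an app hits its shard
  }
  return load;
}

// heavy traffic for the two finalists, light for everyone else
const load = loadPerShard({
  blackhawks: 5000, kings: 5000,
  rangers: 500, canadiens: 500, bruins: 500, wild: 500,
});
console.log(load); // [ 10000, 1000, 500, 500 ] — shard 0 is hot
```

Adding a fifth shard changes nothing here: the two hot apps stay where they are, which is the point of the "adding a shard doesn't help" slides.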
The balancer is a super-important part of a sharded mongo cluster… You should love it.
Start with an empty cluster, and start filling it with data
(we’ll denote “fullness” by going from green to red)
This is an example of what can happen when the balancer is not running
Okay, so now we have a very unbalanced cluster. 3 of our replica sets are very full, one is pretty full, and the newest one is hardly in use.
(remember that the balancer isn’t running in this scenario)
The balancer will see the full shards and one near-empty one, and will want to move a ton of chunks all at once, causing severe I/O strain on the system.
(no way to tell the balancer to chill)
remember that all of these chunk moves cause updates to your configdb, place load on your config servers, and have to propagate to all mongos routers, too
you’re going to be adding a lot of I/O to the system when you move chunks, and it still has to be able to perform its normal functions, so over-provision
we’re in AWS so we just go for PIOPS… but if you’re on physical hardware, consider RAIDing wider, or upgrading your SAN, or...
updating the configdb (when you move chunks) puts load on your config servers, so make sure they’re ready to handle it
this is tedious and will take a LONG time (more detail in a minute)
gradually you’ll get to a happier place
take a deep breath before you...
be ready to turn it off and return to step 3 if needed, then try again
(this was step 3)
here’s an example from our “rawcrashlog” collection (hash and _id truncated)
start both commands running on both the source and target
don’t need to specify source shard, since your shard key (unsymbolized_hash in our case) and _id are sufficient for mongo to know where it’s coming from
watch your monitoring (iostat/mongostat) -- look for spikes in page faults, queued reads/writes, database lock percentages.
obviously look at your application monitoring too, to ensure no adverse effects.
use MMS as well (e.g., lock %, page faults)
if everything looks good, keep going. if not, you need to start over with more IOPS, more config server capacity, etc.
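The "keep going or back off" decision can be captured as a simple threshold check. The metric names and cutoffs below are made up for illustration; in practice you would pull them from iostat, mongostat, and MMS and tune the thresholds to your own hardware.

```javascript
// Decide whether it's safe to move the next chunk, based on the spikes
// mentioned above: page faults, queued reads/writes, and lock percentage.
function safeToContinue(stats) {
  return stats.pageFaultsPerSec < 100 &&
         stats.queuedReaders + stats.queuedWriters < 50 &&
         stats.lockPercent < 40;
}

console.log(safeToContinue({
  pageFaultsPerSec: 12, queuedReaders: 3, queuedWriters: 1, lockPercent: 15,
})); // true — quiet system, move the next chunk

console.log(safeToContinue({
  pageFaultsPerSec: 250, queuedReaders: 60, queuedWriters: 20, lockPercent: 80,
})); // false — back off, add IOPS or config server capacity first
```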
seems obvious, but not always the case.
and if you’re not running it, you can embark on this tedious journey to get it running again.
best-case scenario is to make all of the right choices up front… but you’re probably not going to do that. (though hopefully you can learn a bit from our experience and minimize the wrong choices you make).
the good news is MongoDB is still working for us, despite the headaches we’ve had to deal with.
reminder that MongoDB World is right around the corner
along with all of these great presenters, I’ll be giving a version of this talk there, and would love to meet you