In computer programming, operator overloading, sometimes termed operator ad hoc polymorphism, is a specific case of polymorphism, where different operators have different implementations depending on their arguments. Operator overloading is generally defined by a programming language, a programmer, or both.
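To make the definition concrete, here is a minimal C++ sketch (the `Vec2` type is an invented illustration, not from any particular library): overloading gives `+` and `==` user-defined meanings for one type, while the built-in meanings still apply to types like `int`.

```cpp
#include <cassert>

// A small 2-D vector type. Overloading gives '+' and '==' user-defined
// meanings for Vec2, while the built-in meanings still apply to ints:
// the operator's implementation depends on its argument types.
struct Vec2 {
    double x, y;
};

// Overloaded '+': component-wise vector addition.
Vec2 operator+(const Vec2& a, const Vec2& b) {
    return Vec2{a.x + b.x, a.y + b.y};
}

// Overloaded '==': component-wise equality.
bool operator==(const Vec2& a, const Vec2& b) {
    return a.x == b.x && a.y == b.y;
}
```

With these definitions, `Vec2{1, 2} + Vec2{3, 4}` calls the user-defined `+`, while `1 + 2` still uses the built-in integer addition: the same symbol, two implementations.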
Stack and its Applications: Data Structures ADT (Soumen Santra)
A stack is an Abstract Data Type (ADT) based on the Last In, First Out (LIFO) principle.
Features of a stack as an Abstract Data Type (ADT)
C implementations of stack functions such as push(), pop(), isEmpty(), isOverflow(), and peep()
Stack applications such as the Tower of Hanoi, infix-to-postfix conversion, postfix evaluation, and parenthesis checking
Diagrammatic representation of the Tower of Hanoi, infix-to-postfix conversion, and postfix evaluation
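One of the stack applications listed above, postfix evaluation, can be sketched roughly as follows (a minimal C++ illustration limited to single-digit operands, with no error handling; not taken from the slides themselves):

```cpp
#include <cassert>
#include <cctype>
#include <stack>
#include <string>

// Evaluates a postfix (RPN) expression over single-digit operands,
// e.g. "23*4+" evaluates (2*3)+4. Operands are pushed; each operator
// pops two values, applies itself, and pushes the result back --
// the classic stack application.
int evalPostfix(const std::string& expr) {
    std::stack<int> st;
    for (char c : expr) {
        if (std::isdigit(static_cast<unsigned char>(c))) {
            st.push(c - '0');           // operand: push its value
        } else {
            int b = st.top(); st.pop(); // right operand
            int a = st.top(); st.pop(); // left operand
            switch (c) {
                case '+': st.push(a + b); break;
                case '-': st.push(a - b); break;
                case '*': st.push(a * b); break;
                case '/': st.push(a / b); break; // integer division
            }
        }
    }
    return st.top();                    // final result remains on top
}
```

Infix-to-postfix conversion works the same way in spirit, using the stack to hold operators until their precedence allows them to be emitted.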
Queues
a. Concept and Definition
b. Queue as an ADT
c. Implementation of Insert and Delete operation of:
• Linear Queue
• Circular Queue
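The insert and delete operations of a circular queue can be sketched as below (a minimal C++ illustration with an arbitrary fixed capacity; the class and member names are my own, not from the referenced material):

```cpp
#include <cassert>
#include <cstddef>

// Fixed-capacity circular queue: the front index wraps around with
// modulo arithmetic, so slots freed at the front are reused. This is
// the advantage over a linear queue, whose rear pointer reaches the
// end of the array even when earlier slots are free.
class CircularQueue {
    static const std::size_t CAP = 5;
    int data[CAP];
    std::size_t head = 0;   // index of the front element
    std::size_t count = 0;  // number of stored elements
public:
    bool isEmpty() const { return count == 0; }
    bool isFull()  const { return count == CAP; }
    bool insert(int v) {                 // enqueue at the rear
        if (isFull()) return false;
        data[(head + count) % CAP] = v;  // rear position wraps around
        ++count;
        return true;
    }
    bool remove(int& v) {                // dequeue from the front
        if (isEmpty()) return false;
        v = data[head];
        head = (head + 1) % CAP;         // front index wraps around
        --count;
        return true;
    }
};
```

A linear queue differs only in that the rear index never wraps, which is why it can report "full" while slots at the front sit unused.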
For More:
https://github.com/ashim888/dataStructureAndAlgorithm
http://www.ashimlamichhane.com.np/
Breadth First Search & Depth First Search (Kevin Jadiya)
The attached slides describe how the breadth-first search and depth-first search techniques are used to traverse a graph or tree, with the algorithm and a simple code snippet.
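A minimal sketch of breadth-first traversal over an adjacency list (C++; my own illustration, not the snippet from the slides). Swapping the FIFO queue for a stack, or for recursion, yields depth-first search instead:

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Breadth-first search over an adjacency-list graph: a FIFO queue
// expands vertices level by level, so nodes are visited in order of
// distance from the start vertex. Returns the visit order.
std::vector<int> bfs(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<bool> visited(adj.size(), false);
    std::vector<int> order;
    std::queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        order.push_back(u);
        for (int v : adj[u]) {
            if (!visited[v]) {   // mark on enqueue so each node enters once
                visited[v] = true;
                q.push(v);
            }
        }
    }
    return order;
}
```

For the square graph 0-1, 0-2, 1-3, 2-3, starting at vertex 0, the queue discipline visits both neighbours 1 and 2 before their common neighbour 3.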
Creating a Use Case
Jennifer LeClair
CIS 510
Instructor Name: Dr. Austin Umezurike
10/27/2016
Assignment 2:
Creating a Use Case
Introduction
In this paper I will show how a use case diagram should be used. I base this paper on Figure 3-11, pages 78-80, of our textbook, Systems Analysis and Design in a Changing World, 6th
edition, by Satzinger, Jackson, and Burd. In the use case diagram that I make, I will depict a
use case for an RMO CSMS subsystem. I will also describe an overview of the diagram and
provide an analysis of the actors.
Use Case Introduction
A use case is an activity that a system performs, usually in response to a request from a
user. Use case analysis is a technique used for identifying the functional requirements of
a software system. A use case's main purpose is to capture the system from the point of view
of a client or customer, while the developer plays an analytical role in the development
process. A use case can also be described in terms of a goal that an actor wants to achieve
with a particular system. In the use case diagram that I create, I will show the
actors and use cases for the RMO CSMS marketing subsystem.
[Use case diagram: RMO CSMS Marketing Subsystem, with Marketing and Merchandising actors]
Overview
This use case diagram shows the following: the system boundary, the associations, and
the actors. An actor (also called an external agent) is the person or group that interacts
with the system by entering or receiving data. The system boundary is another part of the
whole system; it encloses the computerized part of the application along with the users who
operate it. An association is a logical connection, such as the relationship between a
customer's order and a certain employee in a department. In my diagram I have included two
actors, one representing marketing and the other representing merchandising.
Analysis
A use case is a list of actions or event steps that defines the interactions between a
role and a system in order to achieve a goal. An association is the connection between a use
case and an actor among the elements that make up a use case diagram; it tells us that the
actor and the use case communicate. On the marketing side, the actor needs to be able to
update and add promotions, products, and business partners. On the merchandising side, the
actor needs to be able to update and add product information and accessory packages.
Summary
The important part of a use case diagram is that you can identi ...
#ATAGTR2021 Presentation: "Use of AI and ML in Performance Testing" by Adolf Patel (Agile Testing Alliance)
Interactive session on "Use of AI and ML in Performance Testing" by Adolf Patel, Performance Test Architect at Cognizant, at #ATAGTR2021.
#ATAGTR2021 was the 6th Edition of Global Testing Retreat.
The video recording of the session is now available on the following link: https://www.youtube.com/watch?v=ajyPSmmswpM
To know more about #ATAGTR2021, please visit: https://gtr.agiletestingalliance.org/
Slides from "Taking an Holistic Approach to Product Quality" (Peter Marshall)
This is the base material used during a half-day workshop at expoQA on 17 June 2019. Peter Marshall runs over the necessary technical, organisational, and improvement practices required to deliver high-quality software, with deep dives into continuous delivery, DevOps, organisational structures, agile, and digital transformation.
Learning LWF Chain Graphs: A Markov Blanket Discovery Approach (Pooyan Jamshidi)
LWF Chain graphs were introduced by Lauritzen, Wermuth, and Frydenberg as a generalization of graphical models based on undirected graphs and DAGs. From the causality point of view, in an LWF CG: Directed edges represent direct causal effects. Undirected edges represent causal effects due to interference, which occurs when an individual’s outcome is influenced by their social interaction with other population members, e.g., in situations that involve contagious agents, educational programs, or social networks. The construction of chain graph models is a challenging task that would be greatly facilitated by automation.
Markov blanket discovery has an important role in the structure learning of Bayesian networks. It is surprising, however, how little attention it has attracted in the context of learning LWF chain graphs. In this work, we provide a graphical characterization of Markov blankets in chain graphs. The characterization is different from the well-known one for Bayesian networks and generalizes it. We provide a novel scalable and sound algorithm for Markov blanket discovery in LWF chain graphs. We also provide a sound and scalable constraint-based framework for learning the structure of LWF CGs from faithful causally sufficient data. With the use of our algorithm, the problem of structure learning is reduced to finding an efficient algorithm for Markov blanket discovery in LWF chain graphs. This greatly simplifies the structure-learning task and makes a wide range of inference/learning problems computationally tractable, because our approach exploits locality.
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn... (Pooyan Jamshidi)
We enable reliable and dependable self‐adaptations of component connectors in unreliable environments with imperfect monitoring facilities and conflicting user opinions about adaptation policies by developing a framework which comprises: (a) mechanisms for robust model evolution, (b) a method for adaptation reasoning, and (c) tool support that allows an end‐to‐end application of the developed techniques in real‐world domains.
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut... (Pooyan Jamshidi)
Modern cyber-physical systems (e.g., robotics systems) are typically composed of physical and software components, the characteristics of which are likely to change over time. Assumptions about parts of the system made at design time may not hold at run time, especially when a system is deployed for long periods (e.g., over decades). Self-adaptation is designed to find reconfigurations of systems to handle such run-time inconsistencies. Planners can be used to find and enact optimal reconfigurations in such an evolving context. However, for systems that are highly configurable, such planning becomes intractable due to the size of the adaptation space. To overcome this challenge, in this paper we explore an approach that (a) uses machine learning to find Pareto-optimal configurations without needing to explore every configuration and (b) restricts the search space to such configurations to make planning tractable. We explore this in the context of robot missions that need to consider task timeliness and energy consumption. An independent evaluation shows that our approach results in high-quality adaptation plans in uncertain and adversarial environments.
Paper: https://arxiv.org/abs/1903.03920
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ... (Pooyan Jamshidi)
Despite achieving state-of-the-art performance across many domains, machine learning systems are highly vulnerable to subtle adversarial perturbations. Although defense approaches have been proposed in recent years, many have been bypassed by even weak adversarial attacks. Previous studies showed that ensembles created by combining multiple weak defenses (i.e., input data transformations) are still weak. In this talk, I will show that it is indeed possible to construct effective ensembles using weak defenses to block adversarial attacks. However, to do so requires a diverse set of such weak defenses. Based on this motivation, I will present Athena, an extensible framework for building effective defenses to adversarial attacks against machine learning systems. I will talk about the effectiveness of ensemble strategies with a diverse set of many weak defenses that comprise transforming the inputs (e.g., rotation, shifting, noising, denoising, and many more) before feeding them to target deep neural network classifiers. I will also discuss the effectiveness of the ensembles with adversarial examples generated by various adversaries in different threat models. In the second half of the talk, I will explain why building defenses based on the idea of many diverse weak defenses works, when it is most effective, and what its inherent limitations and overhead are.
Transfer Learning for Performance Analysis of Configurable Systems: A Causal ... (Pooyan Jamshidi)
Modern systems (e.g., deep neural networks, big data analytics, and compilers) are highly configurable, which means they expose different performance behavior under different configurations. The fundamental challenge is that one cannot simply measure all configurations due to the sheer size of the configuration space. Transfer learning has been used to reduce the measurement effort by transferring knowledge about the performance behavior of systems across environments. Previously, research has shown that statistical models are indeed transferable across environments. In this work, we investigate identifiability and transportability of causal effects and statistical relations in highly configurable systems. Our causal analysis agrees with previous exploratory analysis (Jamshidi et al., 2017) and confirms that the causal effects of configuration options can be carried over across environments with high confidence. We expect that the ability to carry over causal relations will enable effective performance analysis of highly configurable systems.
Integrated Model Discovery and Self-Adaptation of Robots (Pooyan Jamshidi)
The goal is to learn models efficiently under budget constraints in order to adapt to perturbations such as environmental changes or changes in internal resources.
Modern software-intensive systems are composed of components that are likely to change their behaviour over time (e.g., adding/removing components).
For software to continue to operate under such changes, the assumptions about parts of the system made at design time may not hold at runtime due to uncertainty.
Mechanisms must be put in place that can dynamically learn new models of these assumptions and use them to make decisions about missions, configurations, etc.
Transfer Learning for Performance Analysis of Highly-Configurable Software (Pooyan Jamshidi)
A wide range of modern software-intensive systems (e.g., autonomous systems, big data analytics, robotics, deep neural architectures) are built to be configurable. These systems offer a rich space for adaptation to different domains and tasks. Developers and users often need to reason about the performance of such systems, making tradeoffs to change specific quality attributes or detecting performance anomalies. For instance, developers of image-recognition mobile apps are interested not only in learning which deep neural architectures are accurate enough to classify their images correctly, but also in which architectures consume the least power on the mobile devices on which they are deployed. Recent research has focused on models built from performance measurements obtained by instrumenting the system. However, the fundamental problem is that the learning techniques for building a reliable performance model do not scale well, simply because the configuration space is exponentially large and impossible to explore exhaustively. For example, it would take over 60 years to explore the whole configuration space of a system with 25 binary options.
In this talk, I will start by motivating the configuration space explosion problem based on my previous experience with large-scale big data systems in industry. I will then present my transfer learning solution to tackle the scalability challenge: instead of taking the measurements from the real system, we learn the performance model using samples from cheap sources, such as simulators that approximate the performance of the real system, with fair fidelity and at low cost. Results show that despite the high cost of measurement on the real system, learning performance models can become surprisingly cheap as long as certain properties are reused across environments. In the second half of the talk, I will present empirical evidence, which lays a foundation for a theory explaining why and when transfer learning works, by showing the similarities of performance behavior across environments. I will present observations of environmental changes' impacts (such as changes to hardware, workload, and software versions) for a selected set of configurable systems from different domains to identify the key elements that can be exploited for transfer learning. These observations demonstrate a promising path for building efficient, reliable, and dependable software systems. Finally, I will share my research vision for the next five years and outline my immediate plans to further explore the opportunities of transfer learning.
Related Papers:
https://arxiv.org/pdf/1709.02280
https://arxiv.org/pdf/1704.00234
https://arxiv.org/pdf/1606.06543
Architectural Tradeoff in Learning-Based Software (Pooyan Jamshidi)
In classical software development, developers write explicit instructions in a programming language to hardcode the explicit behavior of software systems. By writing each line of code, the programmer instructs the software to have the desirable behavior by exploring a specific point in program space.
Recently, however, software systems are adding learning components that, instead of hardcoding an explicit behavior, learn a behavior through data. The learning-intensive software systems are written in terms of models and their parameters that need to be adjusted based on data. In learning-enabled systems, we specify some constraints on the behavior of a desirable program (e.g., a data set of input–output pairs of examples) and use the computational resources to search through the program space to find a program that satisfies the constraints. In neural networks, we restrict the search to a continuous subset of the program space.
This talk provides experimental evidence of making tradeoffs for deep neural network models, using the Deep Neural Network Architecture system as a case study. Concrete experimental results are presented; also featured are additional case studies in big data (Storm, Cassandra), data analytics (configurable boosting algorithms), and robotics applications.
Sensitivity Analysis for Building Adaptive Robotic Software (Pooyan Jamshidi)
P. Jamshidi, M. Velez, C. Kästner, N. Siegmund, and P. Kawthekar. Transfer learning for improving model predictions in highly configurable software. Int’l Symp. Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2017.
Transfer Learning for Improving Model Predictions in Highly Configurable Soft... (Pooyan Jamshidi)
Modern software systems are now being built to be used in dynamic environments utilizing configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact and, therefore, we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: Instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate performance of the real system at low cost.
Transfer Learning for Improving Model Predictions in Robotic Systems (Pooyan Jamshidi)
Modern software systems are now being built to be used in dynamic environments utilizing configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact and, therefore, we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: Instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate performance of the real system at low cost.
An Uncertainty-Aware Approach to Optimal Configuration of Stream Processing S... (Pooyan Jamshidi)
https://arxiv.org/abs/1606.06543
Finding optimal configurations for Stream Processing Systems (SPS) is a challenging problem due to the large number of parameters that can influence their performance and the lack of analytical models to anticipate the effect of a change. To tackle this issue, we consider tuning methods where an experimenter is given a limited budget of experiments and needs to carefully allocate this budget to find optimal configurations. We propose in this setting Bayesian Optimization for Configuration Optimization (BO4CO), an auto-tuning algorithm that leverages Gaussian Processes (GPs) to iteratively capture posterior distributions of the configuration spaces and sequentially drive the experimentation. Validation based on Apache Storm demonstrates that our approach locates optimal configurations within a limited experimental budget, with an improvement of SPS performance typically of at least an order of magnitude compared to existing configuration algorithms.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Welcome to ViralQR, your best QR code generator (ViralQR)
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Whether you run a small business or a huge enterprise, our easy-to-use platform provides multiple options that can be tailored to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, enhancing customer interaction and making business operations more fluid. We strongly believe in the power of QR codes to change how businesses interact with their customers, and we are committed to making that technology accessible and usable far and wide.
Our Achievements
Since our inception, we have successfully served many clients, providing QR codes for marketing, service delivery, and feedback collection across various industries. Our platform has been recognized for its ease of use and rich features, which help businesses make the most of QR codes.
Our Services
At ViralQR, we offer a comprehensive suite of services that caters to your needs:
Static QR codes: Create free static QR codes. These can store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These offer all the advanced features and are subscription-based. They can link directly to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, ViralQR offers a 14-day free trial, an excellent opportunity for new users to get a feel for the platform. From there, you can easily subscribe and experience the full power of dynamic QR codes. The subscription plans are not only meant for business; they are priced flexibly so that virtually every business can afford to benefit from our service.
Why choose us?
ViralQR provides services for marketing, advertising, catering, retail, and more. QR codes can be placed on fliers, packaging, merchandise, and banners, and can even substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, you can improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools that provide a clear view of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
Thank you for choosing ViralQR; we offer nothing but the best in QR code services to meet your business's diverse needs!
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
2. Learning Goals
Understand how to build a system that can put the power of machine learning to use.
Understand how to incorporate ML-based components into a larger system.
Understand the principles that govern these systems, both as software and as predictive systems.
3. Learning Goals
Understand the challenges of building ML systems.
Understand strategies for making ML systems reactive.
I won't teach you specific machine learning techniques; that simply does not fit in one lecture course.
4. The topic of this lecture belongs here! (I also hang around here.)
[Venn diagram: Software Architecture, Machine Learning, and Distributed Systems; this lecture sits at their intersection.]
5. ML as a technique vs. ML as a system in production
8. Who am I?
Postdoc at CMU; Fall '18: Prof. at USC
My research: SE + Systems + ML
Software dev/eng for 7 years; 3+ years as architect
More info: https://pooyanjamshidi.github.io
10. What is the grand difference?
Software systems are built from sets of software components.
ML(-based) systems are built from sets of software components capable of learning from data and making predictions about the future.
11. Case study
A startup that tries to build a machine learning system from the ground up and finds it very, very hard!
12. Sniffable
Sniffable is the Instagram for dogs. Dog owners post pictures of their dogs, and other dog owners (sniffers!) like, share, and comment on those pictures.
13. A New Feature for Sniffable
Sniffers want to promote their specific dog.
Sniffers want their dogs to become celebrities!
So they want a new tool to make their pupdates more viral.
14. Pooch Predictor
A new feature to figure out which picture would get the biggest response on Sniffable:
a feature to predict the number of likes a given pupdate might get, based on the hashtags used.
15. Why did the startup come up with Pooch Predictor?
It would engage the dog owners, help them create viral content, and grow the Sniffable network as a whole.
21. System issues appeared
Pooch Predictor wasn't very reliable:
A change in the database -> the queries would fail.
High load on the server -> the modeling job would fail.
This happened more and more as the size of the social network increased.
22. Failures were hard to detect
Failures were hard to detect without building up more sophisticated monitoring infrastructure.
There wasn't much that could be done other than restarting the job.
23. Modeling issues
The architect realized that some of the features weren't being correctly extracted from the raw data.
It was also really hard to understand how a change to the features would impact modeling performance.
24. A major issue happened
For a couple of weeks, their interaction rates steadily trended down.
For users outside of the US, Pooch Predictor would always predict a negative number of likes!
It was a problem with the modeling system's location-based features.
The data scientist and engineer paired to fix the issue.
25. Things went from bad to worse!
Problems appeared when the data scientist implemented more feature-extraction functionality to (hopefully) improve modeling quality.
The app slowed down dramatically.
The new functionality caused exceptions to be thrown on the server, which led to pupdates being dropped.
26. What was the reason?
Sending the data from the app back to the server took way too long to maintain reasonable responsiveness.
The server would throw an exception any time the prediction functionality saw any of the new features.
27. So what did they do?
After understanding where things had gone wrong, the team quickly rolled back all of the new functionality and restored the app to a normal operational state.
They held a retrospective to build a better system…
28. Building a better system (operation)
Sniffable must remain responsive, regardless of any problems with the predictive system.
The predictive component needs to be less tightly coupled to the rest of the system.
The predictive system needs to behave predictably regardless of high load or errors in the system itself.
29. Building a better system (development)
It should be easier for different developers to make changes to the predictive system.
The code needs to use different programming idioms that ensure better performance.
30. Building a better system (ML)
The predictive system must measure its modeling performance better.
The predictive system should support evolution and change.
The predictive system should support online experimentation.
It should be easy for developers to supervise the predictive system and rapidly correct any rogue behavior.
31. The Sniffable team missed something big, right?
They built what initially looked like a useful ML system that added value to their core product.
Production issues with their ML system frequently pulled the team away from work on improvements to the capability of the system.
Even though they had a bunch of smart people to develop a model to predict the dynamics of dog-based social networking, their system repeatedly failed at its mission.
32. Building ML systems is hard!
The data scientist knew how to do ML. Pooch Predictor totally worked on his laptop.
But the data scientist was not thinking of ML as a system, only as a technique!
Pooch Predictor was a failure both as a predictive system and as software.
33. To build it right: Reactive ML
Reactive ML = ML as a system
35. Starting from the beginning
An ML system must collect data from the outside world.
In the Pooch Predictor example, the team tried to skip this concern by using the data that their application already had.
Obviously, this approach was quick, but it tightly couples the Sniffable application data model to the Pooch Predictor data model.
37. What is a "feature"?
Features are meaningful data points derived from raw data, related to the entity being predicted on.
A Sniffable example of a "feature" would be… the number of friends a given dog has.
38. What is a feature vector?
The feature values for a given instance are collectively called a feature vector.
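The feature/feature-vector idea can be sketched in a few lines of Python. The field names and values below are hypothetical, invented to fit the Sniffable example; they are not from any actual Pooch Predictor code.

```python
# A raw Sniffable post (hypothetical fields, for illustration only).
raw_post = {
    "dog_name": "Rex",
    "hashtags": ["#adorabull", "#goodboy"],
    "friends": ["Fido", "Bella", "Max"],
}

def extract_features(post):
    """Derive meaningful data points (features) from the raw post."""
    return {
        "num_hashtags": len(post["hashtags"]),
        "num_friends": len(post["friends"]),  # the slide's example feature
        "has_adorabull": int("#adorabull" in post["hashtags"]),
    }

# All feature values for one instance, collected into a feature vector.
features = extract_features(raw_post)
feature_vector = [features[k] for k in sorted(features)]
print(feature_vector)  # → [1, 3, 2]
```

Fixing the key order (here alphabetically) matters: the model and the feature extractor must agree on which position holds which feature.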
39. What is a concept?
A concept is the thing that the system is trying to predict.
In the context of Pooch Predictor, a concept would be the number of likes a given post receives.
When a concept is discrete (i.e., not continuous), it can be called a class label.
40. Types of learning
Supervised: given data, predict a label.
Unsupervised: given data, learn about that data.
RL: given data, choose actions to maximize expected long-term reward.
[Figure: "Types of Deep Learning" [81, 165] — supervised, unsupervised, semi-supervised, and reinforcement learning. Taken from Lex Fridman (fridman@mit.edu), Course 6.S191: Intro to Deep Learning, January 2018; full reference list at https://selfdrivingcars.mit.edu/references]
41. What is a "model"?
A program that maps from features to predicted concepts.
For Pooch Predictor: a program that takes as input the feature representation of the hashtag data and returns the predicted number of likes that a given pupdate might receive.
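In code terms, a model is just a function from a feature representation to a predicted concept. A minimal sketch, with weights that are invented for illustration (a real model would learn them from data):

```python
# A hypothetical linear model: one weight per feature (values made up).
weights = {"num_hashtags": 2.0, "num_friends": 0.5, "has_adorabull": 5.0}
bias = 1.0

def predict_likes(features):
    """Map a feature representation of a pupdate to a predicted like count."""
    return bias + sum(weights[name] * value for name, value in features.items())

print(predict_likes({"num_hashtags": 2, "num_friends": 3, "has_adorabull": 1}))
# → 11.5  (1.0 + 2*2.0 + 3*0.5 + 1*5.0)
```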
42. What is model publishing?
Model publishing means making the model program available outside of the context it was learned in, so that it can make predictions on data it hasn't seen before.
43. Making the model program available outside of the context it was learned in.
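One minimal way to "publish" a model outside its training context is to serialize it. The sketch below uses Python's standard pickle module; that choice, and the trivial mean-predicting model, are my assumptions — the slides don't prescribe a mechanism.

```python
import pickle

# Train-time context: learn a trivial baseline "model" (the mean like count).
training_likes = [10, 14, 12]
model = {"mean_likes": sum(training_likes) / len(training_likes)}

# Publish: serialize the model so another process can load it.
blob = pickle.dumps(model)

# Serving-time context: deserialize and predict on data never seen in training.
served_model = pickle.loads(blob)

def predict(_pupdate):
    # A baseline model predicts the training mean for any new pupdate.
    return served_model["mean_likes"]

print(predict({"hashtags": ["#newdog"]}))  # → 12.0
```

The key point is the separation: training happens in one context, and the serialized artifact carries the model into a serving context that knows nothing about the training data.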
44. Remember any issues regarding model publishing in Sniffable?
The Sniffable team largely skipped it in their original implementation.
They didn't even set up their system to retrain the model on a regular basis.
Their next attempt at implementing model retraining also ran into difficulty, causing their models to be out of sync with their feature extractors.
45. ML systems are different from transactional systems
Some of the Pooch Predictor problems stemmed from treating the predictive system like a transactional business application.
An approach that relies upon strong consistency guarantees just doesn't work for modern distributed systems;
it is out of sync with the pervasive and intrinsic uncertainty in a machine learning system.
46. ML apps are dynamic systems
Other problems the Sniffable team experienced had to do with not thinking about their system in dynamic terms.
ML systems must evolve, and they must support parallel tracks for that evolution through experimentation.
50. ML Quality Attributes for Reactive ML Systems
Reactive systems are: Responsive, Resilient, Elastic, Message-Driven.
@jeffksmithjr @xdotaDevoxx
[Most of these figures are taken from Jeff Smith's slides and book]
52. Responsive
If a system doesn't respond to its users, then it's useless.
Think of the Sniffable team causing a massive slowdown in the Sniffable app due to the unresponsiveness of their ML system.
54. Resilient
Whether the cause is hardware, human error, or design flaws, software always breaks.
Providing some sort of acceptable response even when things don't go as planned is a key part of ensuring that users view a system as being responsive.
56. Elastic
Elastic systems should respond to increases or decreases in load.
The Sniffable team saw this when their traffic ramped up and the Pooch Predictor system couldn't keep up with the load.
58. Message-driven
A loosely coupled system organized around message passing can make it easier to detect failures or issues with load.
Moreover, a design with this trait helps keep the effects of errors isolated, rather than letting them flare into production issues that need to be immediately addressed, as they were in Pooch Predictor.
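A toy illustration of message-driven decoupling, using Python's standard queue module (my choice of mechanism, not the deck's): the app only enqueues prediction requests and moves on, while a separate consumer drains the queue. A slow or broken predictor then backs up the bounded queue instead of blocking the app.

```python
import queue

requests = queue.Queue(maxsize=100)  # bounded: back-pressure instead of overload

def app_post_pupdate(pupdate):
    """The app only enqueues a message; it never waits on the predictor."""
    try:
        requests.put_nowait(pupdate)
        return "posted"
    except queue.Full:
        return "posted (prediction skipped)"  # degrade gracefully, don't block

def prediction_worker():
    """Drain pending messages; failures here can't slow the app down."""
    results = []
    while not requests.empty():
        pupdate = requests.get_nowait()
        results.append((pupdate, 10))  # dummy prediction for illustration
    return results

app_post_pupdate({"hashtags": ["#goodboy"]})
print(prediction_worker())  # → [({'hashtags': ['#goodboy']}, 10)]
```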
59. Wait, how do you build a system that actually has these qualities?
Tactics
60. Replication
Data, whether at rest or in motion, should be redundantly stored or processed.
In the Sniffable example, there was a time when the server that ran the model-learning job failed, and no model was learned.
The fix: run two or more model-learning jobs.
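The replication tactic can be sketched as running the model-learning job redundantly and keeping whichever replica succeeds. The failure simulation below is contrived for illustration.

```python
def learn_model(data, *, fail=False):
    """A stand-in model-learning job; `fail` simulates a server crash."""
    if fail:
        raise RuntimeError("model-learning server died")
    return {"mean_likes": sum(data) / len(data)}

def learn_with_replicas(data, jobs):
    """Run redundant learning jobs; return the first model that succeeds."""
    for job in jobs:
        try:
            return job(data)
        except RuntimeError:
            continue  # this replica failed; fall through to the next one
    raise RuntimeError("all replicas failed")

data = [8, 10, 12]
model = learn_with_replicas(
    data,
    [lambda d: learn_model(d, fail=True),  # replica 1 crashes...
     lambda d: learn_model(d)],            # ...replica 2 still learns a model
)
print(model)  # → {'mean_likes': 10.0}
```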
61. An Example: Spark Architecture
Rather than requiring you to always have two pipelines executing, Spark gives you automatic, fine-grained replication so that the system can recover from failure.
62. Containment
Prevent the failure of any single component of the system from affecting any other component.
Docker or rkt? No — this strategy isn't about any one implementation. Containment can be implemented using many different systems.
The point is to prevent the sort of cascading failure we saw in Pooch Predictor, and to do so at a structural level.
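Containment can be illustrated without any container runtime: isolate the predictive call behind a boundary that converts any failure into a safe fallback, so an exception in the predictor can't cascade into the posting path. The fallback value and the failure mode are my inventions.

```python
def fragile_predictor(features):
    """A predictor that blows up on features it has never seen (as in slide 25)."""
    known = {"num_hashtags", "num_friends"}
    unknown = set(features) - known
    if unknown:
        raise ValueError(f"unknown features: {unknown}")
    return 2 * features["num_hashtags"] + features["num_friends"]

def contained_predict(features, fallback=0):
    """Structural boundary: a predictor failure never escapes this function."""
    try:
        return fragile_predictor(features)
    except Exception:
        return fallback  # degrade gracefully; the app keeps posting pupdates

print(contained_predict({"num_hashtags": 2, "num_friends": 3}))  # → 7
print(contained_predict({"brand_new_feature": 1}))               # → 0
```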
63. A Contained Model-Serving Architecture
Remember how the model and the features were out of sync, resulting in exceptions during model serving?
65. A Supervisory Architecture
The published models are observed by the model supervisor. Should their behavior ever deviate from acceptable bounds, the supervisor stops sending them messages requesting predictions.
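The supervisory idea can be sketched as follows. The acceptable-bounds check (predictions must be non-negative, echoing the negative-likes bug from slide 24) and the model names are assumptions chosen for the example.

```python
class ModelSupervisor:
    """Observes published models; stops routing to any that misbehave."""

    def __init__(self, models):
        self.models = models        # name -> prediction function, in priority order
        self.healthy = set(models)  # models still allowed to serve predictions

    def predict(self, features):
        for name, model in self.models.items():
            if name not in self.healthy:
                continue
            prediction = model(features)
            if prediction < 0:              # behavior outside acceptable bounds
                self.healthy.discard(name)  # stop sending it prediction requests
                continue
            return prediction
        return None  # no healthy model left to answer

def rogue_model(features):
    return -5  # always predicts a negative number of likes (cf. slide 24)

def good_model(features):
    return features["num_hashtags"] * 3

sup = ModelSupervisor({"rogue": rogue_model, "good": good_model})
print(sup.predict({"num_hashtags": 2}))  # → 6 (rogue is cut off, good answers)
print("rogue" in sup.healthy)            # → False
```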
70. Infinite Data
Ideally, with its new ML capabilities, Sniffable is going to take off and see tons of traffic.
Some posts would have big dogs, others small ones.
Some posts would use filters, and others would be more natural.
Some would be rich in hashtags, and some wouldn't have any annotations.
71. Strategy 1: Laziness
Delay execution, to separate the composition of functions to execute from their actual execution.
Conceive of the data flow in terms of infinite streams instead of finite batches.
73. Strategy 2: Pure functions
Evaluating the function must not result in some sort of side effect, such as changing the state of some variable or performing some I/O.
Functions can be passed to other functions as arguments -> higher-order functions.
74. Uncertainty
Even before making a prediction, an ML system must deal with the uncertainty of the real world outside of the ML system.
For example, do sniffers using the hashtag #adorabull mean the same thing as sniffers using the hashtag #adorable, or should those be viewed as different features?
75. Strategy 3: Immutable facts
Data that can simply be written once and never modified.
The use of immutable facts will allow us to reason about uncertain views of the world at specific points in time.
Having a complete record of all facts that occurred over the lifetime of the system will also enable important machine learning, like model experimentation and automatic model validation.
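Immutable facts can be sketched as an append-only log of timestamped events; deriving a view of the world "as of" a point in time then becomes a pure fold over the log. The timestamps and event shapes here are invented for the example.

```python
# An append-only log: facts are written once and never modified.
facts = [
    {"t": 1, "event": "like", "post": "p1"},
    {"t": 2, "event": "like", "post": "p1"},
    {"t": 3, "event": "unlike", "post": "p1"},
]

def likes_as_of(log, post, t):
    """Reconstruct the like count for a post at a specific point in time."""
    count = 0
    for fact in log:
        if fact["t"] > t or fact["post"] != post:
            continue
        count += 1 if fact["event"] == "like" else -1
    return count

print(likes_as_of(facts, "p1", t=2))  # → 2 (before the unlike happened)
print(likes_as_of(facts, "p1", t=3))  # → 1
```

Because no fact is ever overwritten, the same log can later replay history to validate a new model against exactly the data the old one saw.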
84. Summary
ML offers a powerful toolkit for building useful prediction systems.
Building an ML model is only part of a larger system development process.
We went through practices from distributed systems and software engineering, in a pragmatic approach to building real-world ML systems.
85. Summary
ML can be viewed as an application, not just as a technique.
We learned how to incorporate ML-based components into a larger complex system.
Reactive is a way to build sophisticated systems; some ML systems don't need to be sophisticated.