This document discusses testing AI systems for bias. It begins by defining bias and explaining how it can arise in machine learning models through choices of training data and definitions of success. It then gives examples of organizational values, such as equality, customer satisfaction, and environmental protection, that AI systems could be designed to reflect. It suggests testing such systems by defining hypotheses based on these values, collecting proxy data to measure the values, establishing test scenarios, and comparing results against the data and goals to identify unintended biases. The goal is for ML models to make decisions aligned with an organization's values rather than with business metrics alone.
1. Not Fair! Testing AI Bias and Organizational Values
Peter Varhol and Gerie Owen
2. About me
• International speaker and writer
• Graduate degrees in Math, CS, Psychology
• Technology communicator
• AWS certified
• Former university professor, tech journalist
• Cat owner and distance runner
• peter@petervarhol.com
3. Gerie Owen
• Quality Engineering Architect
• Testing Strategist & Evangelist
• Test Manager
• Subject expert on testing for TechTarget’s SearchSoftwareQuality.com
• International and Domestic Conference Presenter
• Gerie.owen@gerieowen.com
4. What You Will Learn
• Why bias is often an outcome of machine learning results.
• How bias that reflects organizational values can be a desirable result.
• How to test bias against organizational values.
5. Agenda
• What is bias in AI?
• How does it happen?
• Is bias ever good?
• Building in bias intentionally
• Bias in data
• Summary
6. Bug vs. Bias
• A bug is an identifiable and measurable error in process or result
• Usually fixed with a code change
• A bias is a systematic inflection in decisions that produces results inconsistent with reality
• Bias can’t be fixed with a code change
7. How Does This Happen?
• The problem domain is ambiguous
• There is no single “right” answer
• “Close enough” can usually work
• As long as we can quantify “close enough”
• We don’t know quite why the software responds as it does
• We can’t easily trace code paths
• We choose the data
• The software “learns” from past actions
8. How Can We Tell If It’s Biased?
• We look very carefully at the training data
• We set strict success criteria based on the system requirements
• We run many tests
• Most change parameters only slightly
• Some use radical inputs
• Compare results to success criteria (a minimal check is sketched below)
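To make "compare results to success criteria" concrete, here is a minimal sketch in Python, assuming a binary recommend/reject classifier and tabular output. The data, column names, and the four-fifths threshold are illustrative assumptions, not anything prescribed in the talk.

```python
# Sketch: compare per-group selection rates against a pre-set success
# criterion (here, the "four-fifths" rule). Data and column names are
# hypothetical placeholders for real model output.
import pandas as pd

def selection_rates(df, group_col, prediction_col):
    """Fraction of positive predictions per group."""
    return df.groupby(group_col)[prediction_col].mean()

def passes_four_fifths(rates, threshold=0.8):
    """True if every group's rate is at least `threshold` of the best rate."""
    return bool((rates / rates.max() >= threshold).all())

df = pd.DataFrame({
    "gender":    ["F", "F", "F", "F", "M", "M", "M", "M"],
    "predicted": [1, 0, 0, 1, 1, 1, 1, 0],  # 1 = recommend
})
rates = selection_rates(df, "gender", "predicted")
print(rates)                      # F 0.50, M 0.75
print(passes_four_fifths(rates))  # False -> fails our success criterion
```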
9. Amazon Can’t Rid Its AI of Bias
• Amazon created an AI to crawl the web to find job candidates
• Training data was all resumes submitted for the last ten years
• In IT, the overwhelming majority were male
• The AI “learned” that males were superior for IT jobs
• Amazon couldn’t fix that training bias
10. Many Systems Use Objective Data
• Electric wind sensor
• Determines wind speed and direction
• Based on the cooling of filaments
• Designed a three-layer neural network
• Then used the known data to train it
• Cooling in degrees of all four filaments
• Wind speed, direction
11. Can This Possibly Be Biased?
• Well, yes
• The training data could have been recorded in single
temperature/sunlight/humidity conditions
• Which could affect results under those conditions
• It’s a possible bias that doesn’t hurt anyone
• Or does it?
• Does anyone remember a certain O-ring?
12. Where Do Biases Come From?
• Data selection
• We choose training data that represents only one segment of the domain
• We limit our training data to certain times or seasons
• We overrepresent one population (a quick representativeness check is sketched below)
• Or
• The problem domain has subtly changed
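A quick way to catch the data-selection problems listed above is to compare the composition of the training set with whatever reference proportions are known for the problem domain. A minimal sketch, where the segments and reference shares are assumed for illustration:

```python
# Sketch: flag segments whose share of the training data drifts far from
# known domain proportions. Segments and reference shares are hypothetical.
import pandas as pd

train = pd.DataFrame({"region": ["north"] * 70 + ["south"] * 20 + ["west"] * 10})
reference = {"north": 0.40, "south": 0.35, "west": 0.25}  # assumed true shares

observed = train["region"].value_counts(normalize=True)
for segment, expected in reference.items():
    got = float(observed.get(segment, 0.0))
    flag = ("  <-- overrepresented" if got - expected > 0.10
            else "  <-- underrepresented" if expected - got > 0.10 else "")
    print(f"{segment}: train={got:.2f} vs domain={expected:.2f}{flag}")
```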
13. Where Do Biases Come From?
• Latent bias
• Concepts become incorrectly correlated
• Correlation does not mean causation
• But it is high enough to believe
• We could be promoting stereotypes
• This describes Amazon’s problem (a proxy-feature check is sketched below)
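One way to hunt for latent bias before it ships is to check whether any input feature is strongly correlated with a protected attribute, i.e. quietly acts as a proxy for it. A minimal sketch with hypothetical columns; as the slide says, correlation is a flag for investigation, not proof of causation.

```python
# Sketch: rank features by the strength of their correlation with a
# protected attribute. A high |r| marks a candidate proxy feature worth
# investigating. All columns and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "is_male":          [1, 1, 1, 0, 0, 0, 1, 0],
    "years_experience": [9, 7, 8, 4, 3, 5, 6, 4],   # may proxy for gender here
    "typing_speed_wpm": [60, 55, 70, 65, 58, 72, 50, 66],
})
r = df.corr()["is_male"].drop("is_male")
print(r.reindex(r.abs().sort_values(ascending=False).index))
```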
14. Where Do Biases Come From?
• Interaction bias
• We may focus on keywords that users apply incorrectly
• User incorporates slang or unusual words
• “That’s bad, man”
• The story of Microsoft Tay
• It wasn’t bad, it was trained that way
15. Why Does Bias Matter?
• Wrong answers
• Often with no recourse
• Subtle discrimination (legal or illegal)
• And no one knows it
• Suboptimal results
• We’re not getting it right often enough
16. It’s Not Just AI
• All software has biases
• It’s written by people
• People make decisions on how to design and implement
• Bias is inevitable
• But can we find it and correct it?
• Do we have to?
17. Like This One
• A London doctor can’t get into her fitness center locker room
• The fitness center uses a “smart card” to access and record services
• While acknowledging the problem
• The fitness center couldn’t fix it
• But the software development team could
• They had hard-coded “doctor” to be synonymous with “male”
• It was meant as a convenient shortcut
18. About That Data
• We use data from the problem domain
• What’s that?
• In some cases, scientific measurements are accurate
• But we can choose the wrong measures
• Or not fully represent the problem domain
• But data can also be subjective
• We train with photos of one race over another
• We train with our own values of beauty
19. Is Bias Always Bad?
• Bias can result in suboptimal answers
• Answers that reflect the bias rather than rational thought
• But is that always a problem?
• It depends on how we measure our answers
• We may not want the most profitable answer
• Instead we want to reflect organizational values
• What are those values?
20. Examples of Organizational Values
• Committed to goals of equal hiring, pay, and promotion
• Will not deny credit based on location, race, or other irrelevant factors
• Will keep the environment cleaner than we left it
• Net carbon neutral
• No pollutants into atmosphere
• We will delight our customers
21. Examples of Organizational Values
• These values don’t maximize profit at the expense of everything else
• They represent what we might stand for
• They are extremely difficult to train AI for
• Values tend to be nebulous
• Organizations don’t always practice them
• We don’t know how to measure them
• So we don’t know what data to use
• Are we achieving the desired results?
• How can we test this?
22. How Do We Design Systems With These Goals in Mind?
• We need data
• But we don’t directly measure the goal
• Is there proxy data?
• Training the system
• Data must reflect goals
• That means we must know or suspect the data is measuring the bias we want
23. Examples of Useful Data
• Customer satisfaction
• Survey data
• Complaints/resolution times
• Maintain a clean environment
• Emissions from operations/employee commute
• Recycling volume
• Equal opportunity
• Salary comparisons, hiring statistics (see the mapping sketched below)
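Since none of these values is measured directly, it can help to write the value-to-proxy mapping down explicitly so every value has at least one testable metric. A minimal sketch; the metric names are hypothetical illustrations drawn from the examples above.

```python
# Sketch: an explicit map from organizational values to the proxy metrics
# that stand in for them during testing. Metric names are illustrative.
VALUE_PROXIES = {
    "delight our customers": ["survey_score", "complaint_resolution_hours"],
    "clean environment":     ["operational_emissions_tons", "recycling_volume_kg"],
    "equal opportunity":     ["salary_ratio_by_group", "hiring_rate_by_group"],
}
for value, metrics in VALUE_PROXIES.items():
    print(f"{value}: measured via {', '.join(metrics)}")
```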
24. Sample Scenario
• “We delight our customers”
• AI apps make decisions on customer complaints
• Goal is to satisfy as many as possible
• Make it right if possible
• Train with
• Customer satisfaction survey results
• Objective assessment of customer interaction results
25. Testing the Bias
• Define hypotheses
• Map vague to operational definitions
• Establish test scenarios
• Specify the exact results expected
• With means and standard deviations
• Test using training data
• Measure the results in terms of definitions (a hypothesis-test sketch follows)
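To show what "exact results expected, with means and standard deviations" might look like, here is a minimal sketch that turns "we delight our customers" into a testable hypothesis. The expected mean, standard deviation, and scores are hypothetical stand-ins for real proxy data.

```python
# Sketch: test whether post-decision satisfaction scores are consistent
# with the operational definition of "delight" (expected mean and SD).
# All numbers are hypothetical placeholders.
from statistics import mean

expected_mean, expected_sd = 4.2, 0.5              # operational definition
scores = [4.4, 3.9, 4.6, 4.1, 4.3, 3.8, 4.5, 4.2]  # satisfaction after AI decisions

observed = mean(scores)
z = (observed - expected_mean) / (expected_sd / len(scores) ** 0.5)
print(f"observed mean {observed:.3f}, z = {z:.2f}")
# |z| > 1.96 would be inconsistent with the expectation at the 5% level
print("consistent with the value" if abs(z) <= 1.96
      else "off-target: revisit definitions or data")
```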
26. Testing the Bias
• Compare test results to the data
• That data measures your organizational values
• Is there a consistent match?
• A consistent match means that the AI is accurately reflecting organizational values
• Does it meet the goals set forth at the beginning of the project?
• Are ML recommendations reflecting values?
• If not, it’s time to go back to the drawing board
• Better operational definitions
• New data
27. Finally
• Test using real life data
• Put the application into production
• Confirm results in practice
• At first, side by side with human decision-makers
• Validate the recommendations with people
• Compare recommendations with results
• Yes/no: does the software reflect values? (a minimal agreement check is sketched below)
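As a minimal sketch of the side-by-side validation step, assuming model and human decisions can be compared as simple labels; the decisions and the 90% bar are assumptions for illustration, not a standard.

```python
# Sketch: measure how often the model's recommendations agree with human
# decision-makers during side-by-side operation. Data is hypothetical;
# the 90% bar is an assumed acceptance threshold.
model = ["refund", "refund",  "deny", "replace", "refund", "deny"]
human = ["refund", "replace", "deny", "replace", "refund", "refund"]

agreements = sum(m == h for m, h in zip(model, human))
rate = agreements / len(model)
print(f"agreement: {agreements}/{len(model)} = {rate:.0%}")
print("promote" if rate >= 0.90 else "keep humans in the loop")
```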
28. Back to Bias
• Bias isn’t necessarily bad in ML/AI
• But we need to understand it
• And make sure it reflects our goals
• Testers need to understand organizational values
• And how they represent bias
• And how to incorporate that bias into ML/AI apps
29. Summary
• Machine learning/AI apps can be designed to reflect organizational values
• That may not result in the best decision from a strict business standpoint
• Know your organizational values
• And be committed to maintaining them
• Test to the data that represents the values
• As well as the written values themselves
• Draw conclusions about the decisions being made
30. Thank You
• Peter Varhol, peter@petervarhol.com
• Gerie Owen, gerie@gerieowen.com