How do we protect the privacy of users when building large-scale AI-based systems? How do we develop machine learning models and systems that take fairness, accuracy, explainability, and transparency into account? Model fairness, explainability, and protection of user privacy are considered prerequisites for building trust in, and adoption of, AI systems in high-stakes domains. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications, from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of privacy-preserving AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with key takeaways and open challenges.
“AI is the new electricity” proclaims Andrew Ng, co-founder of Google Brain. Just as we need to know how to safely harness electricity, we also need to know how to securely employ AI to power our businesses. In some scenarios, the security of AI systems can impact human safety. On the flip side, AI can also be misused by cyber-adversaries and so we need to understand how to counter them.
This talk will provide food for thought in 3 areas:
Security of AI systems
Use of AI in cybersecurity
Malicious use of AI
How do we protect the privacy of users when building large-scale AI-based systems? How do we develop machine learning models and systems that take fairness, accountability, and transparency into account? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical, legal, and technical challenges encountered by researchers and practitioners alike. In this talk, we will first motivate the need for adopting a "fairness and privacy by design" approach when developing AI/ML models and systems for different consumer and enterprise applications. We will then focus on the application of fairness-aware machine learning and privacy-preserving data mining techniques in practice, by presenting case studies spanning different LinkedIn applications (such as fairness-aware talent search ranking, privacy-preserving analytics, and LinkedIn Salary privacy & security design), and conclude with the key takeaways and open challenges.
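The privacy-preserving analytics mentioned in this abstract typically build on differential privacy. As a hedged illustration (a minimal sketch, not LinkedIn's actual implementation), the core primitive is the Laplace mechanism: add calibrated noise to a query answer before release. The function names and parameters below are illustrative only.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    # A counting query has sensitivity 1, so Laplace(1/epsilon) noise
    # gives epsilon-differential privacy for this single release.
    return true_count + laplace_noise(1.0 / epsilon)

# Example: release a noisy count of members matching some attribute.
noisy = private_count(1000, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers; real deployments must also track the cumulative privacy budget across repeated queries.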
Global Governance of Generative AI: The Right Way Forward - Lilian Edwards
AI regulation has been a hot topic since the rise of machine learning (ML) in the “big data” era, but generative AI or “foundation model” tools like ChatGPT, DALL-E 2 (now 3) and Copilot, like ML before them, may create serious societal risks, including embedding and outputting bias; generating fake news, illegal or harmful content, and inadvertent “hallucinations”; infringing existing laws relating, e.g., to copyright and privacy; as well as environmental, competition, and workplace concerns.
Many nations are now considering regulation to address these worries, and can draw on a number of basic and hybrid models of governance. This paper canvasses models of mandatory comprehensive legislation (where the EU AI Act hopes to place itself as a gold standard model); vertical mandatory legislation (where China has quietly taken a lead); adapting existing law (see the many copyright lawsuits underway); and voluntary “soft law” such as codes of ethics, “blueprints”, or industry guidelines. Both the domestic and international regulatory scenes for AI are also increasingly politicised, as the rise of "AI safety" hype shows. Against this backdrop, what choices should smaller countries such as the UK and Australia make? Will international harmonisation lead to a race to the top, as with the GDPR, or to the bottom - rule by tech, for tech?
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/09/responsible-ai-tools-and-frameworks-for-developing-ai-solutions-a-presentation-from-intel/
Mrinal Karvir, Senior Cloud Software Engineering Manager at Intel, presents the “Responsible AI: Tools and Frameworks for Developing AI Solutions” tutorial at the May 2023 Embedded Vision Summit.
Over 90% of businesses using AI say trustworthy and explainable AI is critical to business, according to Morning Consult’s IBM Global AI Adoption Index 2021. If not designed with responsible consideration of fairness, transparency, privacy, safety, and security, AI systems can cause significant harm to people and society and result in financial and reputational damage for companies.
How can we take a human-centric approach to design AI solutions? How can we identify different types of bias and what tools can we use to mitigate those? What are model cards, and how can we use them to improve transparency? What tools can we use to preserve privacy and improve security? In this talk, Karvir discusses practical approaches to adoption of responsible AI principles. She highlights relevant tools and frameworks and explores industry case studies. She also discusses building a well-defined response plan to help address an AI incident efficiently.
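Model cards, which Karvir mentions as a transparency tool, are structured summaries of a model's details, intended use, and evaluation. A minimal sketch follows; the field names loosely follow the well-known "Model Cards for Model Reporting" outline, and every value is a hypothetical placeholder, not a real model or product.

```python
# A minimal model card as a plain dictionary. All values below are
# illustrative placeholders describing a hypothetical model.
model_card = {
    "model_details": {
        "name": "resume-screening-classifier",   # hypothetical model
        "version": "1.2.0",
        "type": "gradient-boosted trees",
    },
    "intended_use": {
        "primary_uses": ["rank candidate resumes for recruiter review"],
        "out_of_scope": ["fully automated hiring decisions"],
    },
    "factors": ["gender", "age band", "geography"],
    "metrics": {"auc": 0.87, "false_positive_rate_gap": 0.03},
    "ethical_considerations": "Audit subgroup error rates before each release.",
}

def render(card: dict) -> str:
    """Render the card as a short human-readable report."""
    lines = []
    for section, body in card.items():
        lines.append(section.replace("_", " ").title())
        lines.append(f"  {body}")
    return "\n".join(lines)

report = render(model_card)
```

Publishing such a card alongside each model release gives stakeholders a consistent place to check scope, metrics, and known limitations.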
Today, I will be presenting on the topic of
"Generative AI, responsible innovation, and the law."
Artificial Intelligence has been making rapid strides in recent years,
and its applications are becoming increasingly diverse.
Generative AI, in particular, has emerged as a promising area of innovation, with the potential to create highly realistic and compelling outputs.
Data Con LA 2020
Description
More and more organizations are embracing AI technology by infusing it into their products and services to differentiate themselves from their competitors. AI is being utilized in some sensitive areas of human life. In this session, let's look at some of the principles governing adoption of AI in a responsible manner. Why are companies accelerating adoption of AI?
Increasingly, organizations are accelerating adoption of AI to differentiate their products and services in the market. The outcomes of this digital transformation can be seen in the areas of optimizing operations, engaging customers, empowering employees, and transforming products and services.
* List some of the sensitive use cases where AI is being applied
* Why is governing AI important, and what are those principles?
* How is Microsoft approaching it?
Speaker
Suresh Paulraj, Microsoft, Principal Cloud Solution Architect Data & AI
Explore the risks and concerns surrounding generative AI in this SlideShare presentation. Delve into the key areas of concern, including bias, misinformation, job loss, privacy, control, overreliance, unintended consequences, and environmental impact. Gain insights and examples that highlight the potential challenges associated with generative AI, and discover the importance of responsible use and ethical considerations in navigating the complex landscape of this transformative technology.
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
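As a concrete illustration of the fairness assessment step this tutorial covers (a minimal sketch, not any company's production tooling), one common screen computes per-group selection rates and their ratio, flagging values below the "four-fifths" threshold used in US employment practice. The data and function names here are invented for illustration.

```python
def selection_rates(outcomes, groups):
    """Positive-outcome rate per group; outcomes are 0/1, groups are labels."""
    totals, positives = {}, {}
    for y, g in zip(outcomes, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + y
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, groups):
    """Min over max selection rate; the 'four-fifths' screen flags < 0.8."""
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())

# Toy audit: group "b" is selected far less often than group "a".
y = [1, 1, 1, 0, 1, 0, 0, 0]
g = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = disparate_impact_ratio(y, g)  # 0.25 / 0.75, well below 0.8
```

A low ratio is a signal to investigate, not a verdict; in practice it is complemented by other metrics (equalized odds, calibration) and by mitigation techniques such as re-ranking or reweighting.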
Responsible AI & Cybersecurity: A tale of two technology risks - Liming Zhu
With the broader adoption of digital technologies and AI, organisations face the emerging risks of AI, the unfamiliar, and the intensified risk of cybersecurity, the familiar. AI and cybersecurity are intertwined, but risk silos are often created when they are dealt with at the technology and governance levels. This talk will explore the interactions between responsible AI and cybersecurity risks via industry case studies. It will show how we can break down the risk silos and use emerging trust-enhancing technologies, architecture and end-to-end software engineering/DevOps practices to connect the two worlds and uplift the risk management posture for both.
A Framework for Navigating Generative Artificial Intelligence for Enterprise - RocketSource
Generative AI has dominated the headlines recently, which has caused many enterprises to put a full stop to implementing this technology until they can understand what’s behind the glitz and glamour. What if we shifted the conversation? What if the focus became a fresh, incremental approach to embracing the opportunities with generative artificial intelligence to keep organizations moving upward on the S Curve of Growth?
Brands stay relevant and solve complex problems by testing the barometer for one thing — will a new strategy, tool, or piece of technology improve humanity?
Human connections are more vital than using shiny new tools or technology. As your teams work to steer clear of the temptation to do what everyone else is doing in uniform, this post will highlight how to stand out, compete, and do so with less risk in today’s world of generative AI overload.
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021) - Krishnaram Kenthapadi
🔹How will AI-based content-generating tools change your mission and products?
🔹This complimentary webinar [ON-DEMAND] explores multiple use cases driving adoption among early adopter customers, providing product leaders with insights into the future of generative AI-powered businesses and the potential generative AI holds for driving innovation and improving business processes.
* "Responsible AI Leadership: A Global Summit on Generative AI"
* April 2023 guide for experts and policymakers
* Developing and governing generative AI systems
* 100+ thought leaders and practitioners participated
* Recommendations for responsible development, open innovation & social progress
* 30 action-oriented recommendations to help navigate AI complexities
Talk presented at the Analytics Frontiers Conference in Charlotte on March 21. The presentation evaluates opportunities and risks of AI and how consumers, businesses, society and governments can mitigate some of the risks.
Artificial Intelligence Bill of Rights: Impacts on AI Governance - TrustArc
Artificial Intelligence (AI) is increasingly being used to make decisions that impact individuals and society as a whole. As the use of AI continues to grow, there is a need to establish guidelines and regulations to ensure that it is being used responsibly and ethically.
In October 2022, the White House Office of Science and Technology Policy (OSTP) published a Blueprint for an AI Bill of Rights (“Blueprint”), which shared a nonbinding roadmap for the responsible use of artificial intelligence (AI). In this webinar, we will examine the key principles that underpin the bill, such as transparency, accountability, and fairness, and discuss how they can help ensure that the use of AI aligns with the values and rights of individuals.
For Reference watch my YouTube Video - https://youtu.be/NqvNFwa0hQc
Hey Everyone!
This is my complete talk from a virtual conference for cybersecurity researchers hosted by BSides Maharashtra; thanks to them for giving me the opportunity to share my thoughts and knowledge with passionate and budding cybersecurity researchers, hackers, bug hunters, and geeks. My talk is a detailed explanation of AI in cybersecurity, and it should be useful to anyone who wants to learn how AI can help in cybersecurity. I cover everything from the basics to advanced topics, so do give it a look, understand the concepts, and share it as much as you can. Thank you, BSides Maharashtra, for inviting me; I am happy and excited to be a part of your event.
If you want to invite me for a webinar or conference connect
mail: hello@priyanshuratnakar.com or priyanshuratnakar@protonmail.com
Event details
Date - 25th to 27th November 2020
CTF
Workshop
Speaker session
website - https://bsidesmaharashtra.com/
Security BSides is a community-driven framework for building events by and for information security community members. These events are already happening in major cities all over the world! We are responsible for organizing an independent BSides-approved event for Delhi, India. We’re a volunteer-organized event (we have no paid staff), and we truly strive to keep information accessible for everyone.
The idea behind Security BSides Delhi is to organize an information security gathering where professionals, experts, researchers, and InfoSec enthusiasts come together to discuss. It creates opportunities for individuals to both present and participate in an intimate atmosphere that encourages collaboration. It is an intense event with discussions, demos, and interaction from participants. It is where conversations about the next big thing are happening.
Feel free to use the slide but give credit somewhere :)
This collection of slides is meant as a starting point and tutorial for those who want to understand AI ethics, and in particular the challenges around bias and fairness. Furthermore, I have also included studies on how we as humans perceive AI's influence in our private as well as working lives.
Contemporary AI engenders hopes and fears – hopes of harnessing AI for productivity growth and innovation – fears of mass unemployment and conflict between humankind and an artificial super-intelligence. Before we let AI drive our hopes and fears, we need to understand what it is and what it is not. Then we need to understand how to implement AI in an ethical and responsible manner. Only then can we harness the power of AI to our benefit.
This presentation looks at how AI works, how it is being used presently in Education and then outline some concerns about how AI might be used in education in the future.
I argue that AI has a much greater part to play in Education – particularly in making education more widely available in the developing world and in reducing the cost of education.
The talk then moves on to discuss general ethical concerns about how AI is being used in society, looking at the issue of how we program autonomous vehicles as a case in point. I then outline five areas of concern about the use (and potential abuse) of AI in education arguing that we need to have a much more informed debate before things go too far. With this in mind, I close with some suggestions for courses and reading that might help colleagues to become better informed about the subject.
This slide deck covers (1) AI and accountability, (2) AI ethics, and (3) privacy protection. Several AI ethics documents, such as the IEEE EAD, the EC HLEG Ethics Guidelines for Trustworthy AI, and the Social Principles of Human-Centric AI (Japan), focus on AI's transparency, accountability, and trust. We follow the discussions of these documents around topics (1), (2), and (3).
The Future of Security: How Artificial Intelligence Will Impact Us - PECB
For decades, the security profession has relied on the best technology we had at the time to deflect the onslaught of what we faced daily in the way of virus and malware attacks. Now, as predicted by Thomas Kuhn in his book “The Structure of Scientific Revolutions,” we’re seeing the dawn of a new day where AI’s machine learning and advanced mathematical algorithms offer validated deflection rates, pre-execution, in the realm of 99%. This session will explore this new paradigm and how it will impact our future.
Main points covered:
• How did our profession change in the world of reactive detection?
• How can we escape the inertia that held us prisoner?
• What is the power of AI and machine learning?
• What are the risks of this new technology?
Presenter:
Our presenter for this webinar, John McClurg serves as Vice President and Ambassador-At-Large of Cylance, where he is responsible for building Security and Trust programs & operational excellence efforts. Prior to Cylance, he served as the CSO of Dell, Honeywell, and Lucent and in the U.S. Intelligence Community, as a twice-decorated member of the Federal Bureau of Investigation (FBI). He also served as a Deputy Branch Chief of CIA where he helped to establish the new Counterespionage Group and was responsible for the management of complex counterespionage investigations. McClurg was voted one of America’s 25 most influential security professionals.
Organizer: Ardian Berisha
Date: October 25th, 2018
Recorded webinar link:
GPT-4 can pass the American state bar exam, but before you go expecting to see robot lawyers taking over the courtroom, hold your horses, cowboys – we're not quite there yet. That being said, AI is becoming increasingly human-like, and as a VC we need to start thinking about how this new wave of technology is going to affect the way we build and run businesses. What do we need to do differently? How can we make sure that our investment strategies reflect these changes? It's a brave new world out there, and we’ve got to keep the big picture in mind!
Sharing here with you what we at Cavalry Ventures found out during our Generative AI deep dive.
Fairness, Transparency, and Privacy in AI @LinkedIn - C4Media
Video and slides synchronized, mp3 and slide download available at URL https://bit.ly/2V9zW73.
Krishnaram Kenthapadi talks about privacy breaches, algorithmic bias/discrimination issues observed in the Internet industry, regulations & laws, and techniques for achieving privacy and fairness in data-driven systems. He focuses on the application of privacy-preserving data mining and fairness-aware ML techniques in practice, by presenting case studies spanning different LinkedIn applications. Filmed at qconsf.com.
Krishnaram Kenthapadi is part of the AI team at LinkedIn, where he leads the transparency and privacy modeling efforts across different applications. He is LinkedIn's representative in Microsoft's AI and Ethics in Engineering & Research Committee. He shaped the technical roadmap for LinkedIn Salary product, and served as the relevance lead for the LinkedIn Careers & Talent Solutions Relevance team.
Responsible Data Use in AI - core tech pillars - Sofus Macskássy
In this deck, we cover four core pillars of responsible data use in AI: fairness, transparency, explainability, and data governance.
Explore the risks and concerns surrounding generative AI in this informative SlideShare presentation. Delve into the key areas of concern, including bias, misinformation, job loss, privacy, control, overreliance, unintended consequences, and environmental impact. Gain valuable insights and examples that highlight the potential challenges associated with generative AI. Discover the importance of responsible use and the need for ethical considerations to navigate the complex landscape of this transformative technology. Expand your understanding of generative AI risks and concerns with this engaging SlideShare presentation.
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
Responsible AI & Cybersecurity: A tale of two technology risksLiming Zhu
With the broader adoption of digital technologies and AI, organisations face the emerging risks of AI, the unfamiliar, and the intensified risk of cybersecurity, the familiar. AI and cybersecurity are intertwined, but risk silos are often created when they are dealt with at the technology and governance levels. This talk will explore the interactions between responsible AI and cybersecurity risks via industry case studies. It will show how we can break down the risk silos and use emerging trust-enhancing technologies, architecture and end-to-end software engineering/DevOps practices to connect the two worlds and uplift the risk management posture for both.
A Framework for Navigating Generative Artificial Intelligence for EnterpriseRocketSource
Generative AI has dominated the headlines recently, which has caused many enterprises to put a full stop to implementing this technology until they can understand what’s behind the glitz and glamour. What if we shifted the conversation? What if the focus became a fresh, incremental approach to embracing the opportunities with generative artificial intelligence to keep organizations moving upward on the S Curve of Growth?
Brands stay relevant and solve complex problems by testing the barometer for one thing — will a new strategy, tool, or piece of technology improve humanity?
Human connections are more vital than using shiny new tools or technology. As your teams work to steer clear of the temptation to do what everyone else is doing in uniform, this post will highlight how to stand out, compete, and do so with less risk in today’s world of generative AI overload.
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)Krishnaram Kenthapadi
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
🔹How will AI-based content-generating tools change your mission and products?
🔹This complimentary webinar [ON-DEMAND] explores multiple use cases that drive adoption in their early adopter customer base to provide product leaders with insights into the future of generative AI-powered businesses, and the potential generative AI holds for driving innovation and improving business processes.
* "Responsible AI Leadership: A Global Summit on Generative AI"
* April 2023 guide for experts and policymakers
* Developing and governing generative AI systems
* 100+ thought leaders and practitioners participated
* Recommendations for responsible development, open innovation & social progress
* 30 action-oriented recommendations to help navigate AI complexities
Talk presented at the Analytics Frontiers Conference in Charlotte on March 21. The presentation evaluates opportunities and risks of AI and how consumers, businesses, society and governments can mitigate some of the risks.
Artificial Intelligence Bill of Rights: Impacts on AI GovernanceTrustArc
Artificial Intelligence (AI) is increasingly being used to make decisions that impact individuals and society as a whole. As the use of AI continues to grow, there is a need to establish guidelines and regulations to ensure that it is being used responsibly and ethically.
In October 2022, the White House Office of Science and Technology Policy (OSTP) published a Blueprint for an AI Bill of Rights (“Blueprint”), which shared a nonbinding roadmap for the responsible use of artificial intelligence (AI). In this webinar, we will examine the key principles that underpin the bill, such as transparency, accountability, and fairness, and discuss how they can help ensure that the use of AI aligns with the values and rights of individuals.
For Reference watch my YouTube Video - https://youtu.be/NqvNFwa0hQc
Hey Everyone!
This is my complete talk at a virtual conference for cybersecurity researchers hosted by BSides Maharashtra. Thanks to them for providing me an opportunity to share my thoughts and knowledge with passionate and budding cybersecurity researchers, hackers, bug hunters, and geeks. My talk is a detailed explanation of AI in cybersecurity, and it should be useful to anyone in the field who wants to learn how AI can help in cybersecurity. I cover everything from the basics to advanced material, so do give it a look, understand the concepts, and share as much as you can. Thank you, BSides Maharashtra, for inviting me; I am happy and excited to be part of your event.
If you want to invite me for a webinar or conference, connect:
mail: hello@priyanshuratnakar.com or priyanshuratnakar@protonmail.com
Event details
Date - 25th to 27th November 2020
CTF
Workshop
Speaker session
website - https://bsidesmaharashtra.com/
Security BSides is a community-driven framework for building events by and for information security community members. These events are already happening in major cities all over the world! We are responsible for organizing an independent, BSides-approved event for Delhi, India. We're a volunteer-organized event (we have no paid staff), and we truly strive to keep information accessible for everyone.
The idea behind Security BSides Delhi is to organize an information security gathering where professionals, experts, researchers, and InfoSec enthusiasts come together to discuss. It creates opportunities for individuals to both present and participate in an intimate atmosphere that encourages collaboration. It is an intense event with discussions, demos, and interaction from participants. It is where conversations about the next big thing are happening.
Feel free to use the slide but give credit somewhere :)
This collection of slides is meant as a starting point and tutorial for those who want to understand AI ethics, in particular the challenges around bias and fairness. Furthermore, I have also included studies on how we as humans perceive AI's influence in our private as well as working lives.
Contemporary AI engenders hopes and fears – hopes of harnessing AI for productivity growth and innovation – fears of mass unemployment and conflict between humankind and an artificial super-intelligence. Before we let AI drive our hopes and fears, we need to understand what it is and what it is not. Then we need to understand how to implement AI in an ethical and responsible manner. Only then can we harness the power of AI to our benefit.
This presentation looks at how AI works and how it is presently used in education, and then outlines some concerns about how AI might be used in education in the future.
I argue that AI has a much greater part to play in Education – particularly in making education more widely available in the developing world and in reducing the cost of education.
The talk then moves on to discuss general ethical concerns about how AI is being used in society, looking at the issue of how we program autonomous vehicles as a case in point. I then outline five areas of concern about the use (and potential abuse) of AI in education arguing that we need to have a much more informed debate before things go too far. With this in mind, I close with some suggestions for courses and reading that might help colleagues to become better informed about the subject.
This slide deck covers (1) AI and accountability, (2) AI ethics, and (3) privacy protection. Several AI ethics documents, such as IEEE EAD, the EC HLEG Ethics Guidelines for Trustworthy AI, and the Social Principles of Human-Centric AI (Japan), focus on AI's transparency, accountability, and trust. We follow the discussions of these documents around topics (1), (2), and (3).
The Future of Security: How Artificial Intelligence Will Impact Us – PECB
For decades, the security profession has relied on the best technology we had at the time to deflect the onslaught of what we faced daily in the way of virus and malware attacks. Now, as predicted by Thomas Kuhn in his book “The Structure of Scientific Revolutions,” we’re seeing the dawn of a new day where AI’s machine learning and advanced mathematical algorithms now offer validated deflection rates, pre-execution, in the realm of 99%. This session will explore this new paradigm and how it will impact our future.
Main points covered:
• How did our profession change in the world of reactive detection?
• How to escape the inertia that held us prisoner?
• What is the power of AI and machine learning?
• What are the risks of this new technology?
Presenter:
Our presenter for this webinar, John McClurg, serves as Vice President and Ambassador-At-Large of Cylance, where he is responsible for building Security and Trust programs and operational excellence efforts. Prior to Cylance, he served as the CSO of Dell, Honeywell, and Lucent, and in the U.S. Intelligence Community as a twice-decorated member of the Federal Bureau of Investigation (FBI). He also served as a Deputy Branch Chief at the CIA, where he helped to establish the new Counterespionage Group and was responsible for the management of complex counterespionage investigations. McClurg was voted one of America’s 25 most influential security professionals.
Organizer: Ardian Berisha
Date: October 25th, 2018
Recorded webinar link:
GPT-4 can pass the American bar exam, but before you go expecting robot lawyers to take over the courtroom, hold your horses, cowboys – we're not quite there yet. That being said, AI is becoming increasingly human-like, and as a VC we need to start thinking about how this new wave of technology will affect the way we build and run businesses. What do we need to do differently? How can we make sure our investment strategies reflect these changes? It's a brave new world out there, and we’ve got to keep the big picture in mind!
Sharing here with you what we at Cavalry Ventures found out during our Generative AI deep dive.
Fairness, Transparency, and Privacy in AI @LinkedIn – C4Media
Video and slides synchronized, mp3 and slide download available at URL https://bit.ly/2V9zW73.
Krishnaram Kenthapadi talks about privacy breaches, algorithmic bias/discrimination issues observed in the Internet industry, regulations & laws, and techniques for achieving privacy and fairness in data-driven systems. He focuses on the application of privacy-preserving data mining and fairness-aware ML techniques in practice, by presenting case studies spanning different LinkedIn applications. Filmed at qconsf.com.
Krishnaram Kenthapadi is part of the AI team at LinkedIn, where he leads the transparency and privacy modeling efforts across different applications. He is LinkedIn's representative in Microsoft's AI and Ethics in Engineering & Research Committee. He shaped the technical roadmap for LinkedIn Salary product, and served as the relevance lead for the LinkedIn Careers & Talent Solutions Relevance team.
Responsible Data Use in AI - core tech pillars – Sofus Macskássy
In this deck, we cover the four core pillars of responsible data use in AI: fairness, transparency, and explainability, as well as data governance.
How do we protect privacy of users in large-scale systems? How do we ensure fairness and transparency when developing machine learned models? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical and legal challenges encountered by researchers and practitioners alike. In this talk (presented at QConSF 2018), we first present an overview of privacy breaches as well as algorithmic bias / discrimination issues observed in the Internet industry over the last few years and the lessons learned, key regulations and laws, and evolution of techniques for achieving privacy and fairness in data-driven systems. We motivate the need for adopting a "privacy and fairness by design" approach when developing data-driven AI/ML models and systems for different consumer and enterprise applications. We also focus on the application of privacy-preserving data mining and fairness-aware machine learning techniques in practice, by presenting case studies spanning different LinkedIn applications, and conclude with the key takeaways and open challenges.
We have critically evaluated how AI will shape integration use cases, their feasibility, and timelines. Emerging Technology Analysis Canvas (ETAC), a framework built to analyze emerging technologies, is the methodology of our study.
We observe that AI can significantly impact integration use cases and identify 13 AI-based use case classes for integration. Points to note include:
Enabling AI in an enterprise involves collecting, cleaning up, and creating a single representation of data as well as enforcing decisions and exposing data outside, each of which leads to many integration use cases. Hence, AI indirectly creates demand for integration.
AI needs data, which in some cases leads to significant competitive advantages. The need to collect data would drive vendors to offer most AI products in the cloud through APIs.
Due to lack of expertise and data, custom AI model building will be limited to large organizations. It is hard for small and medium-sized organizations to build and maintain custom models.
Trusted, Transparent and Fair AI using Open Source – Animesh Singh
Fairness, robustness, and explainability in AI are some of the key cornerstones of trustworthy AI. Through its open source projects, IBM and IBM Research bring together the developer, data science and research community to accelerate the pace of innovation and instrument trust into AI.
AI Cybersecurity: Pros & Cons. AI is reshaping cybersecurity – Tasnim Alasali
Discover how AI is reshaping cybersecurity. This presentation delves into AI's role in enhancing threat detection, the balance of innovation and risk, and the strategies shaping the future of digital defense.
Crowdsourcing & ethics: a few thoughts and references – Matthew Lease
Extracts and addendums from an earlier talk, for those interested in ethics and related issues in regard to crowdsourcing, particularly research uses. Slides updated Sept. 2, 2013.
20240104 HICSS Panel on AI and Legal Ethical 20240103 v7.pptx – ISSIP
20240103 HICSS Panel
Ethical and legal implications raised by Generative AI and Augmented Reality in the workplace.
Souren Paul - https://www.linkedin.com/in/souren-paul-a3bbaa5/
Event: https://kmeducationhub.de/hawaii-international-conference-on-system-sciences-hicss/
Machine learning and AI in a brave new cloud world – Ulf Mattsson
Machine learning platforms are one of the fastest growing services of the public cloud. ML, an approach and set of technologies that use Artificial Intelligence (AI) concepts, is directly related to pattern recognition and computational learning. Early adopters of AI have now rolled out cloud-based services that are bringing AI to the masses.
How are AI, deep learning, machine learning, big data, and cloud related? Can machine learning algorithms enable the use of an individual’s comprehensive biological information to predict or diagnose diseases, and to find or develop the best therapy for that individual? How is Quantum Computing in the Cloud related to the use of AI and Cybersecurity?
Join this webinar to learn more about:
- Machine Learning, Data Discovery and Cloud
- Cloud-Based ML Applications and ML services from AWS and Google Cloud
- How to Automate Machine Learning
GDG Cloud Southlake 28 Brad Taylor and Shawn Augenstein Old Problems in the N... – James Anderson
GDG Cloud Southlake #28: Brad Taylor and Shawn Augenstein: Old Problems in the New Frontiers of AI
• Brad discusses how decades-old laws and expanding regulation have new implications in the ML and Large Model age, and will touch on:
• Legal and Regulatory: Data usage rights, cautionary tale of stability.ai and Getty Images, EU's planned expansion of GDPR re models
• How Neural Networks, zero and one-shot learning, and LLMs have increased the need for better data governance, lineage management
• Shawn speaks on the coming "Data Renaissance"
• The New IP: Prompts and International Interaction Data
• Where GenAI can be used right now and where it maybe shouldn't be used yet
• The Power of the Diversity of Insight
• What is making the future look bright!
Brad has been an intrapreneur and entrepreneur in data, AI, and IoT and has led teams in the creation of NLP, data products and predictive analytics for retention, churn, driver safety, traffic, CX and fleet risk. He has built solutions on global hyperscalers GCP, AWS, Azure, and IBM. Brad is a former founding partner at Tech Wildcatters, and worked with dozens of mobile, SaaS and AI start-ups, many of which became both job creators and profitable exits for TW investors. He is currently a Senior Manager in Pepsico's global Strategy and Transformation group, where he focuses on delivering AI/ML driven solutions.
Shawn Augenstein is a dynamic and highly experienced professional, who is driven by educating, providing equal access to technology and equitable access to information. Currently, Shawn serves as Principal Data & AI Consultant at CDW, where he develops the curriculum and architectures for understanding and furthering the use of AI, as well as developing solutions for both partners and clients. In his spare time, he enjoys exploring new frontiers of Diffusers, capturing moments through photography, and listening to music as a passionate melophile.
This talk explores the basics of AI and machine learning from an application point of view. We run through basic definitions and examples. Then we talk about management of AI/ML projects.
Threat Hunting, Detection, and Incident Response in the Cloud – Ben Johnson
SaaS and IaaS are new frontiers for a lot of security teams. We'll explore some thoughts at how you might approach some of these areas of your environment from a hunting or IR perspective. This was from a Sans webinar on 2019-09-25.
Responsible AI in Industry: Practical Challenges and Lessons Learned – Krishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
Amazon SageMaker Clarify (https://aws.amazon.com/sagemaker/clarify/) provides machine learning developers with greater visibility into their training data and models so they can identify and limit bias and explain predictions. SageMaker Clarify detects potential bias during data preparation, after model training, and in your deployed model by examining attributes you specify. For instance, you can check for bias related to age in your initial dataset or in your trained model and receive a detailed report that quantifies different types of possible bias. SageMaker Clarify also includes feature importance graphs that help you explain model predictions and produces reports which can be used to support internal presentations or to identify issues with your model that you can take steps to correct.
For more information on Amazon SageMaker Clarify, please refer these links: (1) https://aws.amazon.com/sagemaker/clarify (2) https://aws.amazon.com/blogs/aws/new-amazon-sagemaker-clarify-detects-bias-and-increases-the-transparency-of-machine-learning-models (3) https://github.com/aws/amazon-sagemaker-clarify (4) Discussion and demo: https://youtu.be/cQo2ew0DQw0
Acknowledgments: Amazon SageMaker Clarify core team, Amazon AWS AI team, and partners across Amazon
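As a plain-Python illustration of the kind of pre-training bias metrics such tools report, the sketch below computes class imbalance (CI) and difference in positive proportions in labels (DPL) for a binary facet attribute. The dataset and numbers are hypothetical, and this is a hand-rolled sketch, not the Clarify API itself.

```python
def class_imbalance(facet):
    """CI = (n_advantaged - n_disadvantaged) / n_total; facet is 0/1 per row."""
    n_a = sum(1 for f in facet if f == 0)   # advantaged group
    n_d = sum(1 for f in facet if f == 1)   # disadvantaged group
    return (n_a - n_d) / (n_a + n_d)

def dpl(labels, facet):
    """DPL = P(label=1 | facet=0) - P(label=1 | facet=1)."""
    pos_a = [l for l, f in zip(labels, facet) if f == 0]
    pos_d = [l for l, f in zip(labels, facet) if f == 1]
    return sum(pos_a) / len(pos_a) - sum(pos_d) / len(pos_d)

# Toy dataset: label 1 = positive outcome; facet 1 = group of concern.
labels = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
facet  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(class_imbalance(facet))           # 0.0 (groups equally represented)
print(round(dpl(labels, facet), 3))     # 0.6 (80% vs 20% positive rate)
```

A large positive DPL, as here, is the kind of signal that would prompt a closer look at the training data before fitting a model.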
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
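As one concrete instance of the explainability tasks discussed above, the sketch below implements permutation feature importance, a simple model-agnostic attribution method: shuffle one feature column and measure how much accuracy drops. The toy model and data are purely illustrative, not drawn from any of the deployed systems mentioned.

```python
import random

def permutation_importance(model, X, y, feature_idx, n_repeats=10, seed=0):
    """Mean accuracy drop when one feature column is shuffled; a larger
    drop means the model depends more on that feature. Model-agnostic:
    `model` is any callable mapping a feature row to a predicted label."""
    rng = random.Random(seed)
    baseline = sum(model(row) == label for row, label in zip(X, y)) / len(y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        acc = sum(model(row) == label for row, label in zip(X_perm, y)) / len(y)
        drops.append(baseline - acc)
    return sum(drops) / n_repeats

# Toy model that only looks at feature 0; feature 1 is ignored.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.2], [0.1, 0.8], [0.7, 0.4], [0.3, 0.6]]
y = [model(row) for row in X]  # labels follow feature 0 exactly

print(permutation_importance(model, X, y, 0))  # positive drop: feature 0 matters
print(permutation_importance(model, X, y, 1))  # 0.0: feature 1 is ignored
```

This kind of black-box probe is often a first step before reaching for heavier attribution techniques such as SHAP or integrated gradients.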
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
How do we protect privacy of users when building large-scale AI based systems? How do we develop machine learned models and systems taking fairness, accountability, and transparency into account? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical, legal, and technical challenges encountered by researchers and practitioners alike. In this talk, we will first motivate the need for adopting a "fairness and privacy by design" approach when developing AI/ML models and systems for different consumer and enterprise applications. We will then focus on the application of fairness-aware machine learning and privacy-preserving data mining techniques in practice, by presenting case studies spanning different LinkedIn applications (such as fairness-aware talent search ranking, privacy-preserving analytics, and LinkedIn Salary privacy & security design), and conclude with the key takeaways and open challenges.
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, as well as critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we will first motivate the need for model interpretability and explainability in AI from societal, legal, customer/end-user, and model developer perspectives. [Note: Due to time constraints, we will not focus on techniques/tools for providing explainability as part of AI/ML systems.] Then, we will focus on the real-world application of explainability techniques in industry, wherein we present practical challenges / implications for using explainability techniques effectively and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we will identify open problems and research directions for the research community.
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD... – Krishnaram Kenthapadi
Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learned models and data-driven systems, and the potential for such systems to discriminate against certain population groups, due to biases in algorithmic decision-making systems. This tutorial presents an overview of algorithmic bias / discrimination issues observed over the last few years and the lessons learned, key regulations and laws, and evolution of techniques for achieving fairness in machine learning systems. We motivate the need for adopting a "fairness by design" approach (as opposed to viewing algorithmic bias / fairness considerations as an afterthought), when developing machine learning based models and systems for different consumer and enterprise applications. Then, we focus on the application of fairness-aware machine learning techniques in practice by presenting non-proprietary case studies from different technology companies. Finally, based on our experiences working on fairness in machine learning at companies such as Facebook, Google, LinkedIn, and Microsoft, we present open problems and research directions for the data mining / machine learning community.
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (WW... – Krishnaram Kenthapadi
Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learned models and data-driven systems, and the potential for such systems to discriminate against certain population groups, due to biases in algorithmic decision-making systems. This tutorial presents an overview of algorithmic bias / discrimination issues observed over the last few years and the lessons learned, key regulations and laws, and evolution of techniques for achieving fairness in machine learning systems. We will motivate the need for adopting a "fairness by design" approach (as opposed to viewing algorithmic bias / fairness considerations as an afterthought), when developing machine learning based models and systems for different consumer and enterprise applications. Then, we will focus on the application of fairness-aware machine learning techniques in practice by presenting non-proprietary case studies from different technology companies. Finally, based on our experiences working on fairness in machine learning at companies such as Facebook, Google, LinkedIn, and Microsoft, we will present open problems and research directions for the data mining / machine learning community.
Preserving privacy of users is a key requirement of web-scale data mining applications and systems such as web search, recommender systems, crowdsourced platforms, and analytics applications, and has witnessed a renewed focus in light of recent data breaches and new regulations such as GDPR. In this tutorial, we will first present an overview of privacy breaches over the last two decades and the lessons learned, key regulations and laws, and evolution of privacy techniques leading to differential privacy definition / techniques. Then, we will focus on the application of privacy-preserving data mining techniques in practice, by presenting case studies such as Apple's differential privacy deployment for iOS / macOS, Google's RAPPOR, LinkedIn Salary, and Microsoft's differential privacy deployment for collecting Windows telemetry. We will conclude with open problems and challenges for the data mining / machine learning community, based on our experiences in industry.
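As a minimal illustration of the differential privacy techniques referenced here, the sketch below implements the classic Laplace mechanism for a counting query (sensitivity 1). The epsilon value and the salary-bucket scenario are hypothetical, not the actual design of any of the deployments named above.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = rng.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon, rng):
    """Laplace mechanism: a counting query has sensitivity 1, so adding
    Laplace(1/epsilon) noise yields epsilon-differential privacy."""
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
true_count = 120                    # e.g., members in a given salary bucket
noisy = private_count(true_count, epsilon=0.5, rng=rng)
print(noisy)                        # noisy answer near 120; exact value varies
```

Smaller epsilon means more noise and stronger privacy; the noise is unbiased, so averaging many independent releases would recover the true count, which is why production deployments also bound how often a statistic is released.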
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (WSDM 2019 Tutorial), Krishnaram Kenthapadi
Please cite as:
Sarah Bird, Ben Hutchinson, Krishnaram Kenthapadi, Emre Kiciman, and Margaret Mitchell. Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned. WSDM 2019.
Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial), Krishnaram Kenthapadi
This talk provides an overview of privacy-preserving analytics and data mining systems at LinkedIn, highlighting the practical challenges/requirements, techniques, and lessons learned from deployment. The first part presents a framework to compute robust, privacy-preserving analytics, while the second part focuses on the privacy challenges/design for a large crowdsourced system (LinkedIn Salary). This presentation is an expanded version of the talk given at the Differential Privacy Deployed workshop, co-organized by Cynthia Dwork and held at Harvard / American Academy of Sciences in September, 2018.
2. What is Privacy?
• Right of/to privacy
• “Right to be let alone” [L. Brandeis & S. Warren, 1890]
• “No one shall be subjected to arbitrary interference with [their] privacy, family, home or correspondence, nor to attacks upon [their] honor and reputation.” [The United Nations Universal Declaration of Human Rights]
• “The right of a person to be free from intrusion into or publicity concerning matters of a personal nature” [Merriam-Webster]
• “The right not to have one's personal matters disclosed or publicized; the right to be left alone” [Nolo’s Plain-English Law Dictionary]
3. Data Privacy (or Information Privacy)
• “The right to have some control over how your personal information is collected and used” [IAPP]
• “Privacy has fast-emerged as perhaps the most significant consumer protection issue—if not citizen protection issue—in the global information economy” [IAPP]
4. Data Privacy vs. Security
• Data privacy: use & governance of personal data
• Data security: protecting data from malicious attacks & the exploitation of stolen data for profit
• Security is necessary, but not sufficient for addressing privacy.
5. Data Privacy: Technical Problem
Given a dataset with sensitive personal information, how can we compute and release functions of the dataset while protecting individual privacy?
Credit: Kobbi Nissim
6. Massachusetts Group Insurance Commission (1997): Anonymized medical history of state employees
William Weld vs. Latanya Sweeney
Latanya Sweeney (MIT grad student): bought the Cambridge voter roll for $20
born July 31, 1945; resident of 02138
7. 64% uniquely identifiable with ZIP + birth date + gender (in the US population)
Golle, “Revisiting the Uniqueness of Simple Demographics in the US Population”, WPES 2006
8. A History of Privacy Failures …
Credit: Kobbi Nissim, Or Sheffet
9. Lessons Learned …
• Attacker’s advantage: auxiliary information; high dimensionality; enough to succeed on a small fraction of inputs; active; observant …
• Unanticipated privacy failures from new attack methods
• Need for rigorous privacy notions & techniques
11. Algorithmic Bias
• Ethical challenges posed by AI systems
• Inherent biases present in society
• Reflected in training data
• AI/ML models prone to amplifying such biases
12. Laws against Discrimination
Citizenship: Immigration Reform and Control Act
Disability status: Rehabilitation Act of 1973; Americans with Disabilities Act of 1990
Race: Civil Rights Act of 1964
Age: Age Discrimination in Employment Act of 1967
Sex: Equal Pay Act of 1963; Civil Rights Act of 1964
And more...
14. Motivation & Business Opportunities
• Regulatory. We need to understand why the ML model made a given decision and also whether the decision it made was free from bias, both in training and at inference
• Business. Providing explanations to internal teams (loan officers, customer service reps, forecasting teams) and end users/customers
• Data Science. Improving models, understanding whether a model is making inferences based on irrelevant data, etc.
16. LinkedIn operates the largest professional network on the Internet
Tell your story
• 645M+ members
• 30M+ companies are represented on LinkedIn
• 90K+ schools listed (high school & college)
• 35K+ skills listed
• 20M+ open jobs on LinkedIn Jobs
• 280B feed updates
20. Threat Models
User Access Only
• Users store their data
• Noisy data or analytics transmitted
Trusted Curator
• Stored by organization
• Managed only by a trusted curator/admin
• Access only to noisy analytics or synthetic data
External Threat
• Stored by organization
• Organization has access
• Only privacy-enabled models deployed
22. Analytics & Reporting Products at LinkedIn
• Profile View Analytics
• Content Analytics
• Ad Campaign Analytics
All showing demographics of members engaging with the product
24. Analytics & Reporting Products at LinkedIn
Admit only a small # of predetermined query types
Querying for the number of member actions (e.g., clicks on a given ad), for a specified time period, together with the top demographic breakdowns (e.g., Title = “Senior Director”)
25. Privacy Requirements
Attacker cannot infer whether a member performed an action
E.g., click on an article or an ad
Attacker may use auxiliary knowledge
E.g., knowledge of attributes associated with the target member (say, obtained from this member’s LinkedIn profile)
E.g., knowledge of all other members that performed a similar action (say, by creating fake accounts)
26. Possible Privacy Attacks
Targeting: Senior directors in US, who studied at Cornell → matches ~16k LinkedIn members → over minimum targeting threshold
Demographic breakdown: Company = X → may match exactly one person → can determine whether the person clicks on the ad or not
Require minimum reporting threshold? Attacker could create fake profiles! E.g., if the threshold is 10, create 9 fake profiles that all click.
Rounding mechanism (e.g., report in increments of 10)? Still amenable to attacks, e.g., using incremental counts over time to infer individuals’ actions
Need rigorous techniques to preserve member privacy (not reveal exact aggregate counts)
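The fake-profile attack on thresholding/rounding can be sketched in a few lines (a toy illustration; the granularity and counts are hypothetical, not LinkedIn's actual parameters):

```python
def rounded_report(true_count, granularity=10):
    """Report counts only in increments of `granularity` (round down)."""
    return (true_count // granularity) * granularity

# Attacker creates 9 fake profiles that all click the ad, so the only
# unknown contribution to the count is the target member's click.
fake_clicks = 9
with_target = rounded_report(fake_clicks + 1)   # world where target clicked
without_target = rounded_report(fake_clicks)    # world where target did not

# The two worlds produce different reports, revealing the click.
assert with_target != without_target
```

Because the attacker fully controls every other contribution to the count, rounding alone cannot hide whether the reported value crossed a boundary.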
31. Differential Privacy
Databases D and D′ are neighbors if they differ in one person’s data (D′ = D + your data, or D − your data).
Differential Privacy: The distribution of the curator’s output M(D) on database D is (nearly) the same as M(D′).
Dwork, McSherry, Nissim, Smith [TCC 2006]
32. Differential Privacy
(ε, 𝛿)-Differential Privacy: The distribution of the curator’s output M(D) on database D is (nearly) the same as M(D′):
∀S: Pr[M(D)∊S] ≤ exp(ε) ∙ Pr[M(D′)∊S] + 𝛿
Parameter ε quantifies information leakage; parameter 𝛿 gives some slack.
Dwork, McSherry, Nissim, Smith [TCC 2006]; Dwork, Kenthapadi, McSherry, Mironov, Naor [EUROCRYPT 2006]
33. Differential Privacy: Random Noise Addition
If the ℓ1-sensitivity of f : D → ℝn is
maxD,D′ ||f(D) − f(D′)||1 = s,
then adding Laplace noise to the true output,
f(D) + Laplacen(s/ε),
offers (ε, 0)-differential privacy.
Dwork, McSherry, Nissim, Smith [TCC 2006]
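As a concrete illustration, the Laplace mechanism on this slide can be sketched as follows (the counts and ε value are illustrative):

```python
import numpy as np

def laplace_mechanism(true_counts, sensitivity, epsilon, rng=None):
    """Return true_counts plus Laplace(sensitivity/epsilon) noise,
    which offers (epsilon, 0)-differential privacy for a query with
    the given L1 sensitivity."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=np.shape(true_counts))
    return np.asarray(true_counts, dtype=float) + noise

# A count query ("how many members clicked?") changes by at most 1 when
# one person's data is added or removed, so its L1 sensitivity is 1.
noisy = laplace_mechanism([16000, 42], sensitivity=1.0, epsilon=0.5)
```

Smaller ε means a larger noise scale s/ε and therefore stronger privacy at the cost of utility.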
34. PriPeARL: A Framework for Privacy-Preserving Analytics
K. Kenthapadi, T. T. L. Tran, ACM CIKM 2018
Pseudo-random noise generation, inspired by differential privacy:
● Query parameters (entity id (e.g., ad creative/campaign/account), demographic dimension, stat type (impressions, clicks), time range) + fixed secret seed
● → Cryptographic hash, normalized to (0,1) → uniformly random fraction
● → Laplace noise (fixed ε) → random noise
● Noisy count = true count + random noise
To satisfy consistency requirements:
● Pseudo-random noise → same query has the same result over time, avoiding averaging attacks.
● For non-canonical queries (e.g., time ranges, aggregating multiple entities):
○ Use the hierarchy and partition into canonical queries
○ Compute noise for each canonical query and sum up the noisy counts
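The hash-then-Laplace pipeline above can be sketched as follows (a simplified illustration of the idea, not LinkedIn's production code; function and parameter names are made up):

```python
import hashlib
import math

def pripearl_noise(entity_id, dimension, stat_type, time_range,
                   secret_seed, epsilon=1.0, sensitivity=1.0):
    """Deterministic Laplace-style noise: the same canonical query always
    gets the same noise, so repeating the query cannot average it away."""
    key = f"{secret_seed}|{entity_id}|{dimension}|{stat_type}|{time_range}"
    digest = hashlib.sha256(key.encode()).digest()
    # Normalize the first 8 bytes of the hash to a uniform value in (0, 1).
    u = (int.from_bytes(digest[:8], "big") + 0.5) / 2**64
    # Inverse CDF of the Laplace(0, sensitivity/epsilon) distribution.
    b = sensitivity / epsilon
    shifted = u - 0.5
    return -b * math.copysign(1.0, shifted) * math.log(1 - 2 * abs(shifted))

def noisy_count(true_count, *query_key, **kw):
    """Noisy count = true count + (rounded) pseudo-random Laplace noise."""
    return true_count + round(pripearl_noise(*query_key, **kw))
```

Because the noise is a pure function of the query parameters and the secret seed, repeated queries return identical results, while different queries receive independent-looking noise.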
36. Lessons Learned from Deployment (> 1 year)
Semantic consistency vs. unbiased, unrounded noise
Suppression of small counts
Online computation and performance requirements
Scaling across analytics applications
Tools for ease of adoption (code/API library, hands-on how-to tutorial) help!
Having a few entry points (all analytics apps built over Pinot) enables wider adoption
37. Summary
Framework to compute robust, privacy-preserving analytics
Addressing challenges such as preserving member privacy, product coverage, utility, and data consistency
Future
Utility maximization problem given constraints on the ‘privacy loss budget’ per user
E.g., noise with larger variance to impressions but less noise to clicks (or conversions)
E.g., more noise to broader time range sub-queries and less noise to granular time range sub-queries
Reference: K. Kenthapadi, T. Tran, PriPeARL: A Framework for Privacy-Preserving Analytics and Reporting at LinkedIn, ACM CIKM 2018.
38. Acknowledgements
Team:
AI/ML: Krishnaram Kenthapadi, Thanh T. L. Tran
Ad Analytics Product & Engineering: Mark Dietz, Taylor Greason, Ian Koeppe
Legal / Security: Sara Harrington, Sharon Lee, Rohit Pitke
Acknowledgements
Deepak Agarwal, Igor Perisic, Arun Swami
41. Data Privacy Challenges
Minimize the risk of inferring any one individual’s compensation data
Protection against data breach
No single point of failure
42. Problem Statement
How do we design the LinkedIn Salary system taking into account the unique privacy and security challenges, while addressing the product requirements?
K. Kenthapadi, A. Chudhary, and S. Ambler, LinkedIn Salary: A System for Secure Collection and Presentation of Structured Compensation Insights to Job Seekers, IEEE PAC 2017 (arxiv.org/abs/1705.06976)
43. De-identification Example
Original submission (all attributes):
Title: User Exp Designer | Region: SF Bay Area | Company: Google | Industry: Internet | Years of exp: 12 | Degree: BS | FoS: Interactive Media | Skills: UX, Graphics, ... | $$: 100K
De-identified cohorts (each retains only a subset of attributes):
• (Title, Region): User Exp Designer, SF Bay Area → 100K; 115K; ...
• (Title, Region, Industry): User Exp Designer, SF Bay Area, Internet → 100K
• (Title, Region, Years of exp): User Exp Designer, SF Bay Area, 10+ → 100K
• (Title, Region, Company, Years of exp): User Exp Designer, SF Bay Area, Google, 10+ → 100K
#data points > threshold? Yes ⇒ Copy to Hadoop (HDFS)
Note: Original submission stored as encrypted objects.
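The cohort-and-threshold step on this slide might look like the following (an illustrative sketch; the threshold value and field names are hypothetical, not the deployed system):

```python
from collections import defaultdict

K_THRESHOLD = 5  # hypothetical minimum cohort size before release

def build_cohorts(submissions, attribute_sets):
    """Group salary submissions into cohorts keyed by subsets of attributes,
    and release only cohorts with at least K_THRESHOLD data points."""
    cohorts = defaultdict(list)
    for sub in submissions:
        for attrs in attribute_sets:
            key = (attrs, tuple(sub[a] for a in attrs))
            cohorts[key].append(sub["salary"])
    # Suppress cohorts below the threshold instead of releasing them.
    return {key: vals for key, vals in cohorts.items()
            if len(vals) >= K_THRESHOLD}
```

Each submission contributes to several coarser or finer cohorts, and only cohorts large enough to blunt re-identification are copied onward for offline processing.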
45. Acknowledgements
Team:
AI/ML: Krishnaram Kenthapadi, Stuart Ambler, Xi Chen, Yiqun Liu, Parul Jain, Liang Zhang, Ganesh Venkataraman, Tim Converse, Deepak Agarwal
Application Engineering: Ahsan Chudhary, Alan Yang, Alex Navasardyan, Brandyn Bennett, Hrishikesh S, Jim Tao, Juan Pablo Lomeli Diaz, Patrick Schutz, Ricky Yan, Lu Zheng, Stephanie Chou, Joseph Florencio, Santosh Kumar Kancha, Anthony Duerr
Product: Ryan Sandler, Keren Baruch
Other teams (UED, Marketing, BizOps, Analytics, Testing, Voice of Members, Security, …): Julie Kuang, Phil Bunge, Prateek Janardhan, Fiona Li, Bharath Shetty, Sunil Mahadeshwar, Cory Scott, Tushar Dalvi, and team
Acknowledgements
David Freeman, Ashish Gupta, David Hardtke, Rong Rong, Ram
46. Privacy Research @ Amazon - Sampler
Work done by Oluwaseyi Feyisetan, Tom Diethe, Thomas Drake, Borja Balle
47. Simple but effective, privacy-preserving mechanism
Task: subsample from a dataset using additional information in a privacy-preserving way.
Building on existing exponential analysis of k-anonymity, amplified by sampling…
Mechanism M is (β, ε, δ)-differentially private
Model uncertainty via Bayesian NN
“Privacy-preserving Active Learning on Sensitive Data for User Intent Classification” [Feyisetan, Balle, Diethe, Drake; PAL 2019]
48. Differentially-private text redaction
Task: automatically redact sensitive text for privatizing various ML models.
Perturb sentences but maintain meaning, e.g., “goalie wore a hockey helmet” → “keeper wear the nhl hat”
Apply metric DP and analysis of word embeddings to scramble sentences
Mechanism M is dχ-differentially private
Establish plausible deniability statistics:
Nw := Pr[M(w) = w]
Sw := expected number of distinct words output by M(w)
“Privacy- and Utility-Preserving Textual Analysis via Calibrated Multivariate Perturbations” [Feyisetan, Drake, Diethe, Balle; WSDM 2020]
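A minimal sketch of the embedding-perturbation idea (toy vocabulary and embeddings; the noise magnitude follows the exp(−ε‖z‖) density commonly used for dχ privacy, but this is not the authors' implementation):

```python
import numpy as np

def privatize_word(word, vocab, embeddings, epsilon, rng=None):
    """Perturb the word's embedding with noise of density ~ exp(-eps*||z||),
    then return the vocabulary word nearest to the noisy point."""
    rng = rng or np.random.default_rng()
    vec = embeddings[vocab.index(word)]
    d = vec.shape[0]
    # Sample noise: uniformly random direction, Gamma(d, 1/epsilon) magnitude.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=d, scale=1.0 / epsilon)
    noisy = vec + magnitude * direction
    # Project back onto the vocabulary via nearest neighbor.
    dists = np.linalg.norm(embeddings - noisy, axis=1)
    return vocab[int(np.argmin(dists))]

def redact(sentence, vocab, embeddings, epsilon):
    """Redact a sentence word by word."""
    return " ".join(privatize_word(w, vocab, embeddings, epsilon)
                    for w in sentence.split())
```

Larger ε shrinks the noise so words tend to map to themselves (high Nw); smaller ε pushes the noisy point toward other words, giving plausible deniability.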
49. Analysis of DP redaction
Show plausible deniability via the distributions of Nw & Sw as ε varies:
ε → 0: Nw decreases, Sw increases
ε → ∞: Nw increases, Sw decreases
[Figures: impact of ε (epsilon) on accuracy for multi-class classification and question answering tasks, respectively]
50. Improving data utility of DP text redaction
Task: redact text, but use additional structured information to better preserve utility.
Can we improve redaction for models that fail for extraneous words? ~Recall-sensitive
Extend dχ privacy to hyperbolic embeddings [Tifrea 2018]:
Hyperbolic: utilize high-dimensional geometry to infuse embeddings with graph structure, e.g., uni- or bi-directional syllogisms from WebIsADb
New privacy analysis of the Poincaré model and sampling procedure
Mechanism takes advantage of density in the data to apply perturbations more precisely.
“Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text” [Feyisetan, Drake, Diethe; ICDM 2019]
[Figures: tiling in the Poincaré disk; hyperbolic GloVe embeddings projected into the B2 Poincaré disk]
51. Analysis of Hyperbolic redaction
New method improves over both privacy and utility because of its ability to encode meaningful structure in embeddings.
Accuracy scores on classification tasks: * indicates results better than 1 baseline, ** better than 2 baselines
Plausible deniability stat Nw (Pr[M(w) = w]) improved.
54. Fairness in ML
Application-specific challenges
Conversational AI systems: unique bias/fairness/ethics considerations
E.g., hate speech, complex failure modes
Beyond protected categories, e.g., accent, dialect
Entire ecosystem (e.g., including apps such as Alexa skills)
Two-sided markets: e.g., fairness to buyers and to sellers, or to content consumers and producers
Fairness in advertising (externalities)
Tools for ensuring fairness (measuring & mitigating bias) in the AI lifecycle
Pre-processing (representative datasets; modifying features/labels)
ML model training with fairness constraints
Post-processing
Experimentation & post-deployment
55. Explainability in ML
Actionable explanations
Balance between explanations & model secrecy
Robustness of explanations to failure modes (interaction between ML components)
Application-specific challenges
Conversational AI systems: contextual explanations
Gradation of explanations
Tools for explanations across the AI lifecycle
Pre- & post-deployment for ML models
Model developer vs. end user focused
56. Privacy in ML
Privacy for highly sensitive data: model training & analytics using secure enclaves, homomorphic encryption, federated learning / on-device learning, or a hybrid
Privacy-preserving model training, robust against adversarial membership inference attacks (dynamic settings + complex data / model pipelines)
Privacy-preserving mechanisms for data marketplaces
58. Acknowledgements
Amazon AWS AI team
Special thanks to Sergul Aydore, Satadal Bhattacharjee, William Brown, Sanjiv Das, Jason Gelman, Kevin Haas, Tyler Hill, Michael Kearns, Jalaja Kurubarahalli, Andrea Olgiati, Luca Melis, Aaron Roth, Sudipta Sengupta, Ankit Siva