Luis Beltrán discusses building responsible AI models in Azure Machine Learning. Responsible AI means developing AI systems safely, reliably, and ethically, upholding principles like privacy and fairness. For privacy, differential privacy adds noise so that any individual has limited impact on analysis outcomes. For fairness, algorithms like Exponentiated Gradient apply constraints to reduce disparities across demographic groups on metrics like true positive rate. The talk provides an overview of responsible AI principles and techniques for mitigating issues like unfairness in models.
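The differential-privacy idea mentioned above can be sketched in a few lines: the Laplace mechanism adds calibrated noise so that no single record changes a released statistic by much. This is a minimal, self-contained illustration, not the Azure Machine Learning implementation; the function names and the epsilon value are our own.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, epsilon, lower=0.0, upper=1.0):
    """Epsilon-differentially-private mean of values clipped to [lower, upper].

    Clipping bounds each record's influence, so the sensitivity of the mean
    is (upper - lower) / n, and Laplace(sensitivity / epsilon) noise suffices
    for epsilon-differential privacy.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clipped)
    true_mean = sum(clipped) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)
```

With a generous privacy budget (large epsilon) the noisy mean stays close to the true mean; shrinking epsilon increases the noise and hence the privacy protection.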
Generative AI models, such as ChatGPT and Stable Diffusion, can create new and original content like text, images, video, audio, or other data from simple prompts, as well as handle complex dialogs and reason about problems with or without images. These models are disrupting traditional technologies, from search and content creation to automation and problem solving, and are fundamentally shaping the future user interface to computing devices. Generative AI can apply broadly across industries, providing significant enhancements for utility, productivity, and entertainment. As generative AI adoption grows at record-setting speeds and computing demands increase, on-device and hybrid processing are more important than ever. Just like traditional computing evolved from mainframes to today’s mix of cloud and edge devices, AI processing will be distributed between them for AI to scale and reach its full potential.
In this presentation you’ll learn about:
- Why on-device AI is key
- Full-stack AI optimizations to make on-device AI possible and efficient
- Advanced techniques like quantization, distillation, and speculative decoding
- How generative AI models can be run on device and examples of some running now
- Qualcomm Technologies’ role in scaling on-device generative AI
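Of the techniques listed above, quantization is the simplest to illustrate: weights are mapped from 32-bit floats to 8-bit integers plus a per-tensor scale, cutting memory and bandwidth roughly 4x. The sketch below is a generic symmetric post-training scheme, not any vendor's specific implementation; the names are illustrative.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: float list -> (int8 values, scale)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights; rounding error is at most scale / 2."""
    return [v * scale for v in q]
```

In practice the scale would be chosen per channel rather than per tensor, and calibration data would be used to pick clipping ranges, but the core idea is this round-and-rescale step.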
Exploring Opportunities in the Generative AI Value Chain.pdf – Dung Hoang
The article "Exploring Opportunities in the Generative AI Value Chain" by McKinsey & Company's QuantumBlack provides insights into the value created by generative artificial intelligence (AI) and its potential applications.
This talk overviews my background as a female data scientist, introduces many types of generative AI, discusses potential use cases, highlights the need for representation in generative AI, and showcases a few tools that currently exist.
This document provides an agenda and overview for an MLOps workshop hosted by Amazon Web Services. The agenda includes introductions to Amazon AI, MLOps, Amazon SageMaker, machine learning pipelines, and a hands-on exercise to build an MLOps pipeline. It discusses key concepts like personas in MLOps, the CRISP-DM process, microservices deployment, and challenges of MLOps. It also provides overviews of Amazon SageMaker for machine learning and AWS services for continuous integration/delivery.
MLOps – Applying DevOps to Competitive Advantage – DATAVERSITY
MLOps is a practice for collaboration between data science and operations teams to manage production machine learning (ML) lifecycles. As an amalgamation of “machine learning” and “operations,” MLOps applies DevOps principles to ML delivery, enabling ML-based innovation at scale and resulting in:
Faster time to market of ML-based solutions
More rapid rate of experimentation, driving innovation
Assurance of quality, trustworthiness, and ethical AI
MLOps is essential for scaling ML. Without it, enterprises risk costly overhead and stalled progress. Several vendors have emerged with offerings to support MLOps; among the major ones are Microsoft Azure ML and Google Vertex AI. We looked at these offerings from the perspective of enterprise features and time-to-value.
A Framework for Navigating Generative Artificial Intelligence for Enterprise – RocketSource
Generative AI offers both opportunities and risks for enterprises. While it could drive significant ROI through personalized experiences, thought leadership, and faster processes, there are also concerns about job losses, overreliance on automation without oversight, and inaccurate information. Effective adoption of generative AI requires experience management strategies like understanding emotional and logical customer triggers, aligning products and services to experience channels, and building a business model around a compelling brand story. A people-first approach is important to maximize benefits and mitigate risks.
Gartner provides webinars on various topics related to technology. This webinar discusses generative AI, which refers to AI techniques that can generate new unique artifacts like text, images, code, and more based on training data. The webinar covers several topics related to generative AI, including its use in novel molecule discovery, AI avatars, and automated content generation. It provides examples of how generative AI can benefit various industries and recommendations for organizations looking to utilize this emerging technology.
Explore the risks and concerns surrounding generative AI in this SlideShare presentation. It delves into the key areas of concern, including bias, misinformation, job loss, privacy, control, overreliance, unintended consequences, and environmental impact, with examples that highlight the potential challenges, and it underscores the importance of responsible use and ethical considerations in navigating this transformative technology.
In this talk, we will present an overview of Azure Machine Learning, a fully managed cloud service that enables you to easily build, deploy, and share predictive analytics solutions. We will start with the basics of machine learning and end with a demo that uses real world data.
Here is a draft email:
Subject: Automate key processes in automotive manufacturing with UiPath
Dear Tom,
My name is Ed Challis from UiPath. I understand from our mutual connection that you are the Automation Program Manager at BMW, focusing on implementing robotic process automation (RPA).
I wanted to share how some of our automotive manufacturing customers are leveraging UiPath to drive efficiencies in their operations. Specifically:
Quality inspection automation: One customer automated visual inspections on the production line to reduce defects and speed up issue resolution. This helped improve quality standards.
Supply chain management: Another customer automated PO matching, invoice processing and inventory management across their suppliers globally. This
Data Con LA 2020
Description
More and more organizations are embracing AI technology by infusing it into their products and services to differentiate themselves from their competitors. AI is also being utilized in some sensitive areas of human life. In this session, let's look at some of the principles governing the adoption of AI in a responsible manner. Why are companies accelerating adoption of AI?
Increasingly, organizations are accelerating adoption of AI to differentiate their products and services in the market. We have seen the outcomes of this digital transformation in the areas of optimizing operations, engaging customers, empowering employees, and transforming products and services.
* List some of the sensitive use cases where AI is being applied
* Why is governing AI important, and what are those principles?
* How is Microsoft approaching it?
Speaker
Suresh Paulraj, Microsoft, Principal Cloud Solution Architect Data & AI
Generative AI Use cases for Enterprise - Second Session – Gene Leybzon
This document provides an overview of generative AI use cases for enterprises. It begins with addressing concerns that generative AI will replace jobs. The presentation then defines generative AI as AI that generates new content like text, images or code based on patterns learned from training data.
Several examples of generative AI outputs are shown including code, text, images and advice. Potential use cases for enterprises are then outlined, including synthetic data generation, code generation, code quality checks, customer service, and data analysis. The presentation concludes by emphasizing that people will be "replaced by someone who knows how to use AI", not AI itself.
The catalyst for the success of automobiles came not through the invention of the car but rather through the establishment of an innovative assembly line. History shows us that the ability to mass produce and distribute a product is the key to driving adoption of any innovation, and machine learning is no different. MLOps is the assembly line of Machine Learning and in this presentation we will discuss the core capabilities your organization should be focused on to implement a successful MLOps system.
How to set up an artificial intelligence center of excellence in your organiz... – Yogesh Malik
Setting up a COE (Center of Excellence) for AI (Artificial Intelligence) can be a daunting task. A lack of skills and quality data sets can hold you back, but you should not wait any longer: start with what you have, build skills by training people, and move ahead in getting executive approval for building an artificial intelligence center of excellence.
Artificial Intelligence Machine Learning Deep Learning PPT PowerPoint Present... – SlideTeam
This PPT is for mid-level managers, giving information about Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), supervised machine learning, unsupervised machine learning, and reinforcement learning. You can also learn the difference between Artificial Intelligence and Machine Learning and how to decide which of AI, ML, or DL will be better for your business. You will also get to know about expert systems: their examples, characteristics, components, etc. https://bit.ly/2ApMbXB
This session was presented at the AWS Community Day in Munich (September 2023). It's for builders who have heard the buzz about Generative AI but can’t quite grok it yet. It is useful if you are eager to connect the dots on Generative AI terminology and get a fast start to explore further and navigate the space. The session is largely product agnostic and meant to give you the fundamentals to get started.
AMF303-Deep Dive into the Connected Vehicle Reference Architecture.pdf – Amazon Web Services
At this fast-paced, interactive workshop, get hands-on with live data streaming from an actual car driving the streets of Las Vegas. Explore AWS IoT, common patterns, and best practices for processing IoT data, and deploy a reference architecture to begin consuming and analyzing connected vehicle data in your own AWS account. Walk away from this workshop with the knowledge needed to connect your own vehicle to the cloud.
Generative AI Use-cases for Enterprise - First Session – Gene Leybzon
In this presentation, we will delve into the exciting applications of Generative AI across various business domains. Leveraging the capabilities of artificial intelligence and machine learning, Generative AI allows for dynamic, context-aware user interfaces that adapt in real-time to provide personalized user experiences. We will explore how this transformative technology can streamline design processes, facilitate user engagement, and open the doors to new forms of interactivity.
AWS offers a family of intelligent services that provide cloud-native machine learning and deep learning technologies to address different use cases and needs. This deck will help you to gain insight into practical use cases for Amazon Lex, Amazon Polly, and Amazon Rekognition, and learn about newly announced services Amazon Rekognition Video, Amazon Comprehend, Amazon Translate, and Amazon Transcribe. This presentation took place in Australia and New Zealand as part of the AWS Learning Series in 2018.
* "Responsible AI Leadership: A Global Summit on Generative AI"
* April 2023 guide for experts and policymakers
* Covers developing and governing generative AI systems
* More than 100 thought leaders and practitioners participated
* Recommendations for responsible development, open innovation, and social progress
* 30 action-oriented recommendations aimed at navigating AI complexities
The document provides an overview of Vertex AI, Google Cloud's managed machine learning platform. It discusses topics such as managing datasets, building and training machine learning models using both automated and custom approaches, implementing explainable AI, and deploying models. The document also includes references to the Vertex AI documentation and contact information for further information.
Introduction to ChatGPT & how it's implemented in UiPath – sharonP24
This document provides an overview of using ChatGPT for intelligent automation through UiPath. It discusses how ChatGPT can be implemented in UiPath using web API activities. It also covers the benefits and limitations of using ChatGPT, as well as best practices for developing ChatGPT models and considerations for privacy, ethics, security and governance. The document concludes with information on UiPath's community resources for learning RPA skills and connecting with other automation professionals.
H&M uses machine learning for various use cases including logistics, production, sales, marketing, and design/buying. MLOps principles like model versioning, reproducibility, scalability, and automated training are applied to manage the machine learning lifecycle. The technical stack includes Kubernetes, Docker, Azure Databricks for interactive development, Airflow for automated training, and Seldon for model serving. The goal is to apply MLOps at scale for various prediction scenarios through a continuous integration/continuous delivery pipeline.
The document provides an overview of Amazon's machine learning capabilities including:
- Platform services like EC2 P3 instances and Deep Learning AMIs for training models
- Managed services like SageMaker for building, training, and deploying models, and applications services like Rekognition, Transcribe, Translate, and Comprehend for vision, speech and text analysis
- How these capabilities are used across Amazon for applications like fulfilment, search, and developing new products
Recommendation is one of the most popular applications in machine learning (ML). In this workshop, we’ll show you how to build a movie recommendation model based on factorization machines — one of the built-in algorithms of Amazon SageMaker — and the popular MovieLens dataset.
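As a rough sketch of what a factorization machine computes (independent of SageMaker's built-in implementation), each feature i gets a latent vector v_i, and the score combines a linear part with pairwise interactions dot(v_i, v_j) * x_i * x_j. A well-known algebraic identity lets the pairwise term be computed in O(k*n) rather than the naive O(n^2) double sum. All names below are illustrative.

```python
def fm_predict(x, w0, w, V):
    """Factorization-machine score for one example.

    x  : feature values (length n)
    w0 : global bias
    w  : linear weights (length n)
    V  : latent factors, V[i] is the k-dimensional vector for feature i

    The sum over i < j of dot(V[i], V[j]) * x[i] * x[j] is computed per
    factor f as 0.5 * ((sum_i V[i][f] x[i])^2 - sum_i (V[i][f] x[i])^2).
    """
    n, k = len(x), len(V[0])
    linear = w0 + sum(w[i] * x[i] for i in range(n))
    pairwise = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(n))
        s_sq = sum((V[i][f] * x[i]) ** 2 for i in range(n))
        pairwise += s * s - s_sq
    return linear + 0.5 * pairwise
```

In a recommender, x would be a sparse one-hot encoding of (user, movie) plus context features, and the latent vectors capture user-item affinity.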
1. The document discusses responsible AI and outlines several principles for developing AI systems responsibly, including privacy, fairness, transparency, reliability, inclusiveness, and accountability.
2. It provides examples of techniques like differential privacy and model constraints that can help mitigate privacy and fairness issues in AI systems.
3. The document also discusses the importance of transparency in AI through explainability, highlighting packages and methods for interpreting models.
This document discusses responsible artificial intelligence. It begins by showing survey results that most consumers want organizations to be accountable for misusing AI and want privacy protected. It then defines responsible AI as evaluating, developing, and implementing AI safely, reliably, and ethically. The main principles discussed are privacy using differential privacy, fairness by mitigating unfair impacts on groups, and transparency through explainable AI tools. General recommendations are given such as clarifying a system's purpose, considering social biases, and encouraging feedback. Benefits of responsible AI include minimizing unintentional bias and ensuring transparency.
Towards Responsible AI - Global AI Student Conference 2022.pptx – Luis775803
The document discusses responsible and ethical AI practices. It provides statistics showing that most consumers do not fully trust how organizations implement AI and believe they should be held accountable for any misuse. It then discusses key aspects of responsible AI including differential privacy, algorithmic fairness, model explainability, oversight of AI systems, mitigating bias, and ensuring transparency.
Date: Monday, 03 January 2022
Lecture no. 143 of the #تواصل_تطوير initiative
Engineer Mohamed El-Rafei Tarabay
Head of the Programmers' Syndicate in Dakahlia
Presents lecture 143 of the initiative, titled
"IT INDUSTRY"
How to Get Into IT With Zero Experience
On Monday, 03 January 2022
At 7 PM Cairo time / 8 PM Mecca time
Attendance via the Zoom app:
https://us02web.zoom.us/meeting/register/tZUpf-GsrD4jH9N9AxO39J013c1D4bqJNTcu
Note that the lecture will also be streamed live on the Egyptian Engineers Association channels.
We hope to succeed in offering what benefits engineers and the engineering profession in the Arab world. May God grant success.
To contact the initiative's administration, use the Telegram channel:
https://t.me/EEAKSA
Follow the initiative and the live stream through our various channels:
LinkedIn and the e-library:
https://www.linkedin.com/company/eeaksa-egyptian-engineers-association/
Twitter:
https://twitter.com/eeaksa
Facebook:
https://www.facebook.com/EEAKSA
YouTube:
https://www.youtube.com/user/EEAchannal
General lecture registration link:
https://forms.gle/vVmw7L187tiATRPw9
Note: free attendance certificates are available to those who fill in the evaluation link at the end of the lecture.
AI technologies have become ubiquitous due to improvements in computing power, data accumulation, and machine learning methods. However, AI systems also face security risks such as model manipulation, data tampering, and physical-world attacks. To address these challenges, researchers are developing defenses such as adversarial training and detection methods. One approach is black-box testing, where testers probe systems as an attacker would, with minimal internal knowledge, in order to detect vulnerabilities and anticipate attacks.
This document discusses tools and frameworks for developing responsible AI solutions. It begins by outlining some of the costs of AI incidents, such as harm to human life, loss of trust, and fines. It then discusses defining responsible AI principles like respecting human rights, enabling human oversight, and transparency. The document provides examples of bias that can occur in AI systems and tools to detect and mitigate bias. It discusses the importance of a human-centric design approach and case studies of bias in systems. Finally, it outlines best practices for developing responsible AI like integrating tools and certifications.
This document discusses several potential dangers of artificial intelligence including adversarial attacks, bias in data, ethics issues, and security concerns. It provides examples of different types of biases that can occur such as sampling bias, measurement bias, and stereotype bias. The document also discusses challenges in testing AI systems and quotes several experts on the impacts and limitations of AI.
Spark + AI Summit - The Importance of Model Fairness and Interpretability in ...Francesca Lazzeri, PhD
Machine learning model fairness and interpretability are critical for data scientists, researchers and developers to explain their models and understand the value and accuracy of their findings. Interpretability is also important to debug machine learning models and make informed decisions about how to improve them. In this session, Francesca will go over a few methods and tools that enable you to “unpack” machine learning models, gain insights into how and why they produce specific results, assess your AI systems fairness and mitigate any observed fairness issues.
Using open source fairness and interpretability packages, attendees will learn how to:
- Explain model prediction by generating feature importance values for the entire model and/or individual datapoints.
- Achieve model interpretability on real-world datasets at scale, during training and inference.
- Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
- Leverage additional interactive visualizations to assess which groups of users might be negatively impacted by a model and compare multiple models in terms of their fairness and performance.
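The session's specific packages are not named here, but the first bullet (feature importance values for the entire model) can be sketched with scikit-learn's model-agnostic `permutation_importance`; the synthetic dataset and random-forest model below are illustrative assumptions, not the session's actual demo:

```python
# Sketch: global feature importance by permuting one feature at a time
# and measuring the resulting drop in model score.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling an informative feature hurts the score; shuffling noise does not.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```

The same importance values can then feed the interactive dashboards mentioned above, whatever tooling is used to render them.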
AI Cybersecurity: Pros & Cons. AI is reshaping cybersecurityTasnim Alasali
Discover how AI is reshaping cybersecurity. This presentation delves into AI's role in enhancing threat detection, the balance of innovation and risk, and the strategies shaping the future of digital defense.
Responsible AI in Industry: Practical Challenges and Lessons LearnedKrishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
This document discusses several important technologies used to develop digital libraries, including blockchain, artificial intelligence, Docker, and Kubernetes. Blockchain can be used to develop archiving services between institutions securely and electronically without the need for online databases. Docker and Kubernetes help develop the infrastructure flexibly by installing software directly on any web server without the need for programming. The document also discusses data mining concepts like classification, regression, clustering, and recommendation that can be used in library services with artificial intelligence. Machine learning tasks and techniques are also covered.
Data scientists have a duty to ensure they analyze data and train machine learning models responsibly; respecting individual privacy, mitigating bias, and ensuring transparency. This module explores some considerations and techniques for applying responsible machine learning principles.
Using AI to Build Fair and Equitable WorkplacesData Con LA
Data Con LA 2020
Description
With recent events putting a spotlight on anti-racism, social-justice, climate change, and mental health there's a call for increased ethics and transparency in business. Companies are, rightfully, feeling responsible for providing underrepresented employees with the same treatment and opportunities as their majority counterparts. AI can, and will, be used to help companies understand their environment, develop strategies for improvement and monitor progress. And, as AI is used to make increasingly complex and life-changing decisions, it is critical to ensure that these decisions are fair, equitable and explainable. Unfortunately, it is becoming increasingly clear that, much like humans, AI can be biased. It is therefore imperative that as we develop AI solutions, we are fully aware of the dangers of bias, understand how bias can manifest and know how to take steps to address and minimize it.
In this session you will learn:
*Definitions of fairness, regulated domains and protected classes
*How bias can manifest in AI
*How bias in AI can be measured, tracked and reduced
*Best practices for ensuring that bias doesn't creep into AI/ML models over time
*How explainability can be used to perform real-time checks on predictions
Speakers
Lawrence Spracklen, RSquared AI, Engineering Leadership
Sonya Balzer, RSquared.ai, Director of AI Marketing
[DSC Adria 23] Muthu Ramachandran AI Ethics Framework for Generative AI such ...DataScienceConferenc1
The document proposes an AI Ethics Framework for generative AI systems such as chatbots. It discusses the need to integrate AI ethics and quality into the design, development, implementation, testing and operation of AI products. The framework aims to provide a strategic, business-driven approach for building ethical, sustainable and secure AI. It covers areas like requirements engineering, development processes, project management, and evaluation of AI architectures from an ethics perspective.
Ethical AI: Establish an AI/ML Governance framework addressing Reproducibility, Explainability, Bias & Accountability for Enterprise AI use-cases.
Presentation on “Open Source Enterprise AI/ML Governance” at Linux Foundation’s Open Compliance Summit, Dec 2020 (https://events.linuxfoundation.org/open-compliance-summit/)
Full article: https://towardsdatascience.com/ethical-ai-its-implications-for-enterprise-ai-use-cases-and-governance-81602078f5db
Generative AI's impact on creativity and productivity is undeniable. This presentation dives into real-world security and privacy risks, along with methods to address them. Can generative AI be used for cybersecurity? Let's explore!
Keynote presentation from ECBS conference. The talk is about how to use machine learning and AI in improving software engineering. Experiences from our project in Software Center (www.software-center.se).
How do we protect privacy of users when building large-scale AI based systems? How do we develop machine learned models and systems taking fairness, accountability, and transparency into account? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical, legal, and technical challenges encountered by researchers and practitioners alike. In this talk, we will first motivate the need for adopting a "fairness and privacy by design" approach when developing AI/ML models and systems for different consumer and enterprise applications. We will then focus on the application of fairness-aware machine learning and privacy-preserving data mining techniques in practice, by presenting case studies spanning different LinkedIn applications (such as fairness-aware talent search ranking, privacy-preserving analytics, and LinkedIn Salary privacy & security design), and conclude with the key takeaways and open challenges.
This document provides information about an AI certification course for the Microsoft Azure AI Fundamentals exam (AI-900). It outlines the intended audience, prerequisites, language, content included, exam details, and types of artificial intelligence. The course is intended for anyone interested in learning the basics of AI or clearing the AI-900 exam. It includes over 8 hours of video content, practice tests, quizzes and other study materials. Upon completion, students will receive a certificate and lifetime access to the course content.
Similar to Building responsible AI models in Azure Machine Learning.pptx (20)
TalentLand - Entendiendo tus documentos con Azure Form Recognizer.pptxLuis775803
This document presents Azure Form Recognizer, a service for extracting structured data from forms and documents. Form Recognizer uses machine learning to understand the content and structure of forms without manual labeling. The document describes how Form Recognizer can recognize different types of forms and provides a demonstration of its capabilities. Finally, it highlights the benefits of automating workflows and reducing costs by using Form Recognizer.
IA Conversacional con Power Virtual Agents.pptxLuis775803
The document describes Microsoft's Power Platform and Power Virtual Agents tools for building conversational bots without coding experience. Power Virtual Agents lets teams easily create bots that automate common queries and improve customer satisfaction by enabling 24-hour self-service. QnA Maker is an Azure service that imports question-and-answer pairs to build a knowledge base for bots.
Colombia Cloud Bootcamp - IA y Accesibilidad Pronunciation Assessment.pptxLuis775803
This document presents a session at Cloud Bootcamp 2022 in Colombia on using AI-based pronunciation assessment to improve reading fluency. The session includes demonstrations of assessing the pronunciation of text and words through Microsoft's Speech Studio platform and a mobile app. Commercial success stories and technical limitations of pronunciation assessment are also discussed.
STEMWeek - Entendiendo tus documentos con Azure Form Recognizer.pptxLuis775803
This document presents Form Recognizer, an Azure Applied AI service that extracts structured data from forms using machine learning, with no manual labeling required. With only a few sample forms, a model can be trained to recognize different form types such as invoices, credit card applications, and medical forms. Finally, it highlights the benefits of automating workflows and reducing the cost of reading and processing documents.
Student Summit - Conoce más sobre mi carrera en IA y Datos.pptxLuis775803
This document presents Luis Beltrán's career in artificial intelligence and data. Originally a computer systems engineer and teacher, he became interested in applying AI on mobile devices and learned nature-inspired algorithms, optimization techniques, computer vision, and image processing. He offers advice such as studying, working, and practicing new things, following your dreams, and improving your English.
Gira Speaker Latam - IA y Accesibilidad con Pronunciation Assessment.pptxLuis775803
This document presents a talk on AI-based pronunciation assessment. Two demonstrations show how to use Microsoft's Pronunciation Assessment to evaluate the pronunciation of text and words. Commercial success stories and technical limitations of the service are also discussed.
Build After Party Bolivia - Hugging Face on Azure.pptxLuis775803
The document covers Hugging Face on Azure. Hugging Face is the most popular open-source natural language processing library, used in more than 1,000 research papers and companies. Hugging Face now integrates with Azure, letting customers deploy and run pretrained Hugging Face language models in the Azure cloud simply and securely.
Microsoft Reactor - Creando un modelo de Regresión con Azure Machine Learnin...Luis775803
This document presents a training session on building a regression model with Azure Machine Learning Designer. The session covers setting up the workspace, exploring and cleaning data, training the model, creating an inference pipeline, and deploying a predictive service. The goal is to teach attendees to build a predictive application using Azure ML.
This document presents .NET MAUI, Microsoft's cross-platform app framework for building iOS, Android, Windows, and macOS applications that share most of their code. It offers a native user interface, access to operating-system APIs, and native performance through per-platform compilation. The .NET ecosystem provides tools such as Visual Studio, shared libraries, and Blazor support for web development.
SISWeek Creando un sistema de reconocimiento facial con Face API.pptxLuis775803
This document introduces Luis Beltrán, an expert in artificial intelligence and Microsoft development technologies. It describes Azure Cognitive Services such as Vision, Speech, Language, and Decision, focusing on the Vision Face API, which can detect and identify people and facial expressions in images. It includes recommendations for the ethical use of facial recognition and links to documentation and demos.
This document describes Azure Storage and its four main use cases: 1) storing files, 2) access frequency, 3) security, and 4) static website hosting. It explains that Azure Storage consists of accounts, containers, and blobs for storing and accessing files and objects. It also provides an example blob URL with a query string that includes a shared access signature.
Conoce las novedades de .NET MAUI en .NET 7.pptxLuis775803
This document summarizes the main new features of .NET MAUI in .NET 7, including desktop improvements such as context menus, tooltips, and gestures, the addition of a MapControl for iOS and Android, and performance improvements. It also covers upgrading apps from .NET MAUI 6 to .NET MAUI 7 and the support lifecycle.
Power BI Summit 2023 - Embedding PowerBI reports in .NET MAUI mobile apps.pptxLuis775803
The document discusses embedding Power BI reports in .NET MAUI mobile apps. It begins with an introduction of .NET MAUI, a framework for building native cross-platform apps with shared code and resources. It then explains Power BI embedding, which allows integrating Power BI reports and analytics within apps. The remainder of the document outlines the steps needed to embed Power BI reports in a .NET MAUI mobile app, including fulfilling requirements, selecting an authentication method, registering an Azure AD application, creating a Power BI workspace, and embedding the report. It concludes with a demonstration of an example app.
Mes de Datos Ciencia de Datos a otro nivel con Azure Machine Learning.pptxLuis775803
This document presents three demonstrations of Azure Machine Learning: creating an Azure ML workspace, using the Azure ML Designer, and using Azure ML notebooks, with a Q&A section at the end.
This document introduces Azure and the AZ-900 certification. It explains key concepts such as cloud computing, Azure advantages such as scalability and availability, and core services such as Blob Storage, Virtual Machines, and App Service. It also describes the Azure account structure and the topics covered by the AZ-900 certification, including cloud fundamentals, Azure services, and subscriptions.
Virtual Azure Community Day - Workloads de búsqueda full-text Azure Search.pptxLuis775803
The document discusses the steps involved in processing a query which include query parsing, lexical analysis, document retrieval, and scoring. It also mentions some key terms related to searching including data source, skillset, indexer, and index.
Global Azure 2022 en Español - Clasificacion de imagenes con Azure Machine L...Luis775803
This document presents a conference session on machine learning and Azure Machine Learning held 5–7 May 2022. It includes information about the speaker, Luis Beltrán, a researcher and lecturer in artificial intelligence and development technologies. It also briefly explains machine learning basics and how Azure Machine Learning can simplify the data science process. Finally, it provides links to documentation and hands-on examples for training and deploying image classification models.
4. Responsible AI
Responsible AI is an approach to evaluating, developing, and implementing AI systems in a safe, reliable, and ethical manner, and to making responsible decisions and taking responsible actions.
Generally speaking, Responsible AI is the practice of upholding the principles of AI when designing, building, and using artificial intelligence systems.
11. Differential privacy adds noise so that the maximum impact of an individual on the outcome of an aggregate analysis is at most epsilon (ϵ)
• The incremental privacy risk between opting out and participating is, for any individual, governed by ϵ
• Lower ϵ values result in greater privacy but lower accuracy
• Higher ϵ values result in greater accuracy with a higher risk of individual identification
19. 2. Fairness
Absence of negative impact on groups based on:
Ethnicity
Gender
Age
Physical disability
Other sensitive features
24. Mitigating Unfairness
Create models with parity constraints:
Algorithms:
• Exponentiated Gradient - A *reduction* technique that applies a cost-minimization approach to learning the optimal trade-off of overall predictive performance and fairness disparity (binary classification and regression)
• Grid Search - A simplified version of the Exponentiated Gradient algorithm that works efficiently with small numbers of constraints (binary classification and regression)
• Threshold Optimizer - A *post-processing* technique that applies a constraint to an existing classifier, transforming the prediction as appropriate (binary classification)
25. Mitigating Unfairness
Constraints:
• Demographic parity: Minimize disparity in the selection rate across sensitive feature groups.
• True positive rate parity: Minimize disparity in true positive rate across sensitive feature groups.
• False positive rate parity: Minimize disparity in false positive rate across sensitive feature groups.
• Equalized odds: Minimize disparity in combined true positive rate and false positive rate across sensitive feature groups.
• Error rate parity: Ensure that the error for each sensitive feature group does not deviate from the overall error rate by more than a specified amount.
• Bounded group loss: Restrict the loss for each sensitive feature group in a regression model.
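As a rough illustration of how two of these constraints are quantified as disparities, here is a hand computation with made-up per-group rates (Fairlearn exposes ready-made equivalents such as demographic_parity_difference, but the arithmetic is just a max-minus-min spread):

```python
# Per-group rates for a hypothetical binary classifier (made-up numbers).
rates = {
    "under50": {"selection": 0.18, "tpr": 0.70, "fpr": 0.08},
    "over50":  {"selection": 0.46, "tpr": 0.89, "fpr": 0.12},
}

def disparity(metric):
    """Spread (max - min) of a metric across the sensitive groups."""
    vals = [r[metric] for r in rates.values()]
    return max(vals) - min(vals)

# Demographic parity difference: spread of selection rates across groups.
dp_diff = disparity("selection")
# Equalized odds difference: the worse of the TPR and FPR spreads.
eo_diff = max(disparity("tpr"), disparity("fpr"))
print(dp_diff, eo_diff)
```

A mitigation algorithm working under one of these constraints tries to drive the corresponding disparity toward zero while giving up as little overall performance as possible.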
31. Building responsible AI models in Azure Machine Learning
Luis Beltrán
luis@luisbeltran.mx
Thank you for your attention!
Editor's Notes
When we talk about AI, we usually refer to a machine learning model that is used within a system to automate something. For example, a self-driving car can take images using sensors. A machine learning model can use these images to make predictions (for example, the object in the image is a tree). These predictions are used by the car to make decisions (for example, turn left to avoid the tree). We refer to this whole system as AI.
When AI is developed, there are risks that it will be unfair or seen as a black box that makes decisions for humans.
For example, another model that analyzes a person's information (such as their salary, nationality, age, etc.) and decides whether to grant them a loan or not. Human participation is limited in those decisions made by the system. This can lead to many potential problems and companies need to define a clear approach to the use of AI. Responsible AI is a governance framework meant to do exactly that.
Responsible AI is the practice of designing, developing, and deploying AI with good intent to empower employees and businesses, and impact customers and society fairly, safely, and ethically, enabling organizations to build trust and scale AI more securely.
They are the product of many decisions made by those who develop and implement them. From the purpose of the system to the way people interact with AI systems, responsible AI can help proactively guide decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.
Evaluating and researching ML models before their implementation remains at the core of reliable and responsible AI development.
Microsoft has developed a Responsible AI Standard. It's a framework for building AI systems according to six key principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the foundations of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day.
Let’s talk about some of the principles
AI systems like facial recognition or voice tagging can definitely be used to breach an individual's privacy and threaten security. How an individual's online footprint is used to track, deduce and influence someone's preferences or perspectives is a serious concern that needs to be addressed. The way in which "fake news" or "deep fakes" influence public opinion also represents a threat to individual or social security. AI systems are increasingly misused in this domain. There is a pertinent need to establish a framework that protects an individual's privacy and security.
Privacy is any data that can identify an individual and/or their location, activities, and interests. Such data is generally subject to strict privacy and compliance laws, for example GDPR in Europe. AI systems must comply with privacy laws that require transparency about the collection, use, and storage of data, and must give consumers adequate control over how their data is used.
Data science projects, including machine learning projects, involve analysis of data; and often that data includes sensitive personal details that should be kept private.
In practice, most reports that are published from the data include aggregations of the data, which you may think would provide some privacy – after all, the aggregated results do not reveal the individual data values.
However, consider a case where multiple analyses of the data result in reported aggregations that when combined, could be used to work out information about individuals in the source dataset. In the example on the slide, 10 participants share data about their location and salary. The aggregated salary data tells us the average salary in Seattle; and the location data tells us that 10% of the study participants (in other words, a single person) is based in Seattle – so we can easily determine the specific salary of the Seattle-based participant.
Anyone reviewing both studies who happens to know a person from Seattle who participated, now knows that person's salary.
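The arithmetic behind that leak is trivial, which is exactly the problem. A sketch with an invented salary figure (the 10-participant setup is from the slide):

```python
# Published aggregates from two studies over the same 10 participants
# (participant count from the slide; the salary figure is invented).
participants = 10
seattle_share = 0.10          # location study: 10% of participants live in Seattle
avg_salary_seattle = 130_000  # salary study: average salary for Seattle

# 10% of 10 participants is exactly one person, so the published
# "average" Seattle salary is that one person's actual salary.
seattle_count = round(seattle_share * participants)
exposed_salary = avg_salary_seattle if seattle_count == 1 else None
print(seattle_count, exposed_salary)
```

Neither study published an individual value, yet combining the two aggregates recovers one exactly; this is the attack that differential privacy is designed to blunt.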
Differential privacy seeks to protect individual data values by adding statistical "noise" to the analysis process. The math involved in adding the noise is quite complex, but the principle is fairly intuitive – the noise ensures that data aggregations stay statistically consistent with the actual data values allowing for some random variation, but make it impossible to work out the individual values from the aggregated data. In addition, the noise is different for each analysis, so the results are non-deterministic – in other words, two analyses that perform the same aggregation may produce slightly different results.
Two open source packages that can enable further implementation of privacy and security principles:
Counterfit: Counterfit is an open-source project comprising a command-line tool and generic automation layer to enable developers to simulate cyberattacks against AI systems and verify their security.
SmartNoise: SmartNoise is a project (co-developed by Microsoft) that contains components for building global differentially private systems.
Built-in support for training simple machine learning models like linear and logistic regression
Compatible with open-source training libraries such as TensorFlow Privacy
You can use SmartNoise to create an analysis in which noise is added to the source data. The underlying mathematics of how the noise is added can be quite complex, but SmartNoise takes care of most of the details for you.
Epsilon: The amount of variation caused by adding noise is configurable through a parameter called epsilon. This value governs the amount of additional risk that your personal data can be identified. The key thing is that it applies this privacy principle for every member in the data. A low epsilon value provides the most privacy, at the expense of less accuracy when aggregating the data. A higher epsilon value results in aggregations that are more true to the actual data distribution, but in which the individual contribution of a single individual to the aggregated value is less obscured by noise.
However, there are a few concepts it's useful to be aware of.
Upper and lower bounds: Clamping is used to set upper and lower bounds on values for a variable. This is required to ensure that the noise generated by SmartNoise is consistent with the expected distribution of the original data.
Sample size: To generate consistent differentially private data for some aggregations, SmartNoise needs to know the size of the data sample to be generated.
It's common when analyzing data to examine the distribution of a variable using a histogram.
For example, let's look at the true distribution of ages in the diabetes dataset.
Now let's compare that with a differentially private histogram of Age.
The histograms are similar enough to ensure that reports based on the differentially private data provide the same insights as reports from the raw data.
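A differentially private histogram can be sketched by adding Laplace noise to each bin count (synthetic ages and plain NumPy standing in for SmartNoise; a histogram count has sensitivity 1, since one person affects exactly one bin by one):

```python
import numpy as np

rng = np.random.default_rng(42)
ages = rng.integers(20, 80, size=5_000)  # stand-in for the diabetes Age column

counts, edges = np.histogram(ages, bins=10, range=(0, 100))

# Adding or removing one person changes exactly one bin count by 1,
# so each count has sensitivity 1 and the Laplace noise scale is 1/epsilon.
epsilon = 0.5
noisy_counts = counts + rng.laplace(scale=1 / epsilon, size=counts.shape)

# The noisy histogram keeps the overall shape of the true one.
for lo, hi, true, noisy in zip(edges[:-1], edges[1:], counts, noisy_counts):
    print(f"{lo:3.0f}-{hi:3.0f}: {true:5d} {noisy:9.1f}")
```

Each bin is perturbed independently, so empty bins can even come out slightly negative; the distribution's shape survives while individual membership does not.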
Another common goal of analysis is to establish relationships between variables. SmartNoise provides a differentially private covariance function that can help with this.
In this case, the covariance between Age and DiastolicBloodPressure is positive, indicating that older patients tend to have higher blood pressure.
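The idea can be sketched with synthetic data: compute the covariance, then release it with noise added (a crude stand-in for SmartNoise's differentially private covariance; the Age and DiastolicBloodPressure values are invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2_000
age = rng.uniform(20, 80, size=n)
# Synthetic diastolic blood pressure that rises with age (illustration only).
dbp = 60 + 0.4 * age + rng.normal(0, 8, size=n)

true_cov = np.cov(age, dbp)[0, 1]
# Crude stand-in for a properly calibrated private covariance release.
noisy_cov = true_cov + rng.laplace(scale=5.0)
print(true_cov, noisy_cov)  # both positive: older patients, higher pressure
```

The noisy release still has the right sign and rough magnitude, so the analytical conclusion (older patients tend to have higher blood pressure) survives the privacy protection.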
In addition to the Analysis functionality, SmartNoise enables you to use SQL queries against data sources to retrieve differentially private aggregated results.
First, you need to define the metadata for the tables in your data schema. You can do this in a .yml file, such as the diabetes.yml file in the /metadata folder. The metadata describes the fields in the tables, including data types and minimum and maximum values for numeric fields.
With the metadata defined, you can create readers that you can query. In the following example, we'll create a PandasReader to read the raw data from a Pandas dataframe, and a PrivateReader that adds a differential privacy layer to the PandasReader.
Now you can submit a SQL query that returns an aggregated resultset to the private reader.
Let's compare the result to the same aggregation from the raw data.
You can customize the behavior of a PrivateReader with the epsilon_per_column parameter.
Let's try a reader with a high epsilon (low privacy) value, and another with a low epsilon (high privacy) value.
Note that the results of the high epsilon (low privacy) reader are closer to the true results from the raw data than the results from the low epsilon (high privacy) reader.
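The behavior of such a noisy SQL aggregation can be mimicked in a few lines (a toy stand-in for the PrivateReader, not its API; the labels and epsilon values are invented):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
rows = [("diabetic" if rng.random() < 0.3 else "healthy",) for _ in range(1_000)]

def private_group_count(rows, epsilon, rng):
    """Noisy version of: SELECT label, COUNT(*) ... GROUP BY label."""
    counts = defaultdict(int)
    for (label,) in rows:
        counts[label] += 1
    # Counting queries have sensitivity 1, so the noise scale is 1/epsilon.
    return {k: v + rng.laplace(scale=1 / epsilon) for k, v in counts.items()}

low_privacy = private_group_count(rows, epsilon=5.0, rng=rng)    # near the truth
high_privacy = private_group_count(rows, epsilon=0.05, rng=rng)  # much noisier
print(low_privacy)
print(high_privacy)
```

This mirrors the epsilon_per_column comparison: the high-epsilon reader's counts track the raw data closely, while the low-epsilon reader trades accuracy for privacy.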
Machine learning models are increasingly used to inform decisions that affect people's lives. For example, a prediction made by a machine learning model might influence:
- Approval for a loan, insurance, or other financial service.
- Acceptance into a school or college course.
- Eligibility for a medical trial or experimental treatment.
- Inclusion in a marketing promotion.
- Selection for employment or promotion.
With such critical decisions in the balance, it's important to have confidence that the machine learning models we rely on predict fairly, and don't discriminate for or against subsets of the population based on ethnicity, gender, age, or other factors.
Fairness and inclusiveness in Azure Machine Learning: The fairness assessment component of the Responsible AI dashboard enables data scientists and developers to assess model fairness across sensitive groups defined in terms of gender, ethnicity, age, and other characteristics.
The Responsible AI dashboard provides a single interface to help you implement Responsible AI in practice effectively and efficiently. It brings together several mature Responsible AI tools in the areas of:
Model performance and fairness assessment
Data exploration
Machine learning interpretability
Error analysis
Counterfactual analysis and perturbations
Causal inference
The dashboard offers a holistic assessment and debugging of models so you can make informed data-driven decisions. Having access to all of these tools in one interface empowers you to:
Evaluate and debug your machine learning models by identifying model errors and fairness issues, diagnosing why those errors are happening, and informing your mitigation steps.
Boost your data-driven decision-making abilities by addressing questions such as:
"What is the minimum change that users can apply to their features to get a different outcome from the model?"
"What is the causal effect of reducing or increasing a feature (for example, red meat consumption) on a real-world outcome (for example, diabetes progression)?"
You'll use the Fairlearn package to analyze a model and explore disparity in prediction performance for different subsets of data based on specific features, such as age.
To use the Fairlearn package with Azure Machine Learning, you need the Azure Machine Learning and Fairlearn Python packages, so run the following cell to verify that the azureml-contrib-fairness package is installed.
Train model
After that, you can use the Fairlearn package to compare its behavior for different sensitive feature values.
A mix of fairlearn and scikit-learn metric functions is used to calculate the performance values.
Use scikit-learn metric functions to calculate overall accuracy, recall, and precision metrics.
Use the fairlearn selection_rate function to return the selection rate (percentage of positive predictions) for the overall population.
Use a MetricFrame to calculate selection rate, accuracy, recall, and precision for each age group in the Age sensitive feature.
From these metrics, you should be able to discern that a larger proportion of the older patients are predicted to be diabetic. Accuracy should be more or less equal for the two groups, but a closer inspection of precision and recall indicates some disparity in how well the model predicts for each age group.
The model does a better job of this for patients in the older age group than for younger patients.
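The per-group bookkeeping that MetricFrame automates can be hand-rolled to make the computation concrete (synthetic labels and predictions, constructed so the model recalls older patients better, as in the scenario above):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1_000
age_group = np.where(rng.random(n) < 0.5, "under50", "over50")
# Synthetic ground truth: diabetes is more prevalent in the older group.
y_true = (rng.random(n) < np.where(age_group == "over50", 0.4, 0.2)).astype(int)
# Synthetic predictions: the model makes more mistakes on younger patients.
flip = rng.random(n) < np.where(age_group == "over50", 0.1, 0.3)
y_pred = np.where(flip, 1 - y_true, y_true)

def group_metrics(y_true, y_pred, groups):
    """Selection rate and recall per sensitive-feature group."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        tp = np.sum((y_true[m] == 1) & (y_pred[m] == 1))
        out[str(g)] = {
            "selection_rate": float(y_pred[m].mean()),
            "recall": float(tp / max(y_true[m].sum(), 1)),
        }
    return out

metrics = group_metrics(y_true, y_pred, age_group)
for g, vals in metrics.items():
    print(g, vals)
```

On this synthetic data the older group shows both a higher selection rate (more positive cases, as expected) and a higher recall (the disparity in performance that matters), which is the distinction the notes draw.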
It's often easier to compare metrics visually. To do this, you'll use the Fairlearn fairness dashboard:
When the widget is displayed, use the Get started link to start configuring your visualization.
Select the sensitive features you want to compare (in this case, there's only one: Age).
Select the model performance metric you want to compare (in this case, it's a binary classification model so the options are Accuracy, Balanced accuracy, Precision, and Recall). Start with Recall.
Select the type of fairness comparison you want to view. Start with Demographic parity difference.
The choice of parity constraint depends on the technique being used and the specific fairness criteria you want to apply. Constraints include: Demographic parity: Use this constraint with any of the mitigation algorithms to minimize disparity in the selection rate across sensitive feature groups. For example, in a binary classification scenario, this constraint tries to ensure that an equal number of positive predictions are made in each group.
View the dashboard charts, which show:
Selection rate - A comparison of the number of positive cases per subpopulation.
False positive and false negative rates - how the selected performance metric compares for the subpopulations, including underprediction (false negatives) and overprediction (false positives).
Edit the configuration to compare the predictions based on different performance and fairness metrics.
The results show a much higher selection rate for patients over 50 than for younger patients. However, in reality, age is a genuine factor in diabetes, so you would expect more positive cases among older patients.
If we base model performance on accuracy (in other words, the percentage of predictions the model gets right), then it seems to work more or less equally for both subpopulations. However, based on the precision and recall metrics, the model tends to perform better for patients who are over 50 years old.
A common approach to mitigation is to use one of the algorithms and constraints to train multiple models, and then compare their performance, selection rate, and disparity metrics to find the optimal model for your needs. Often, the choice of model involves a trade-off between raw predictive performance and fairness. Generally, fairness is measured by a reduction in the disparity of selection rates or of a performance metric across sensitive feature groups.
To train the models for comparison, you use mitigation algorithms to create alternative models that apply parity constraints to produce comparable metrics across sensitive feature groups. Common algorithms used to optimize models for fairness include:
GridSearch trains multiple models in an attempt to minimize the disparity of predictive performance for the sensitive features in the dataset (in this case, the age groups)
- Exponentiated Gradient - A *reduction* technique that applies a cost-minimization approach to learning the optimal trade-off of overall predictive performance and fairness disparity (Binary classification and regression)
- Grid Search - A simplified version of the Exponentiated Gradient algorithm that works efficiently with small numbers of constraints (Binary classification and regression)
- Threshold Optimizer - A *post-processing* technique that applies a constraint to an existing classifier, transforming the prediction as appropriate (Binary classification)
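Of the three, the post-processing idea is the easiest to sketch: choose a per-group decision threshold on an existing classifier's scores so that selection rates line up (a toy stand-in for Fairlearn's ThresholdOptimizer with a demographic-parity-style target; the scores and groups are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2_000
group = np.where(rng.random(n) < 0.5, "A", "B")
# Scores from an existing classifier; group B systematically scores lower.
scores = rng.random(n) * np.where(group == "B", 0.8, 1.0)

def per_group_thresholds(scores, groups, target_rate):
    """Pick one threshold per group so each group's selection rate
    (share of positive predictions) equals target_rate."""
    return {str(g): float(np.quantile(scores[groups == g], 1 - target_rate))
            for g in np.unique(groups)}

thresholds = per_group_thresholds(scores, group, target_rate=0.25)
preds = scores >= np.array([thresholds[g] for g in group])

for g in ("A", "B"):
    print(g, thresholds[g], preds[group == g].mean())  # rates near 0.25 for both
```

Note that nothing about the underlying classifier changes; only the decision rule is transformed, which is what makes this a post-processing technique rather than a reduction.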
The choice of parity constraint depends on the technique being used and the specific fairness criteria you want to apply.
The EqualizedOdds parity constraint tries to ensure that models exhibit similar true and false positive rates for each sensitive feature grouping.
The models are shown on a scatter plot. You can compare the models by measuring the disparity in predictions (in other words, the selection rate) or the disparity in the selected performance metric (in this case, recall). In this scenario, we expect disparity in selection rates (because we know that age is a factor in diabetes, with more positive cases in the older age group). What we're interested in is the disparity in predictive performance, so select the option to measure Disparity in recall.
The chart shows clusters of models with the overall recall metric on the X axis, and the disparity in recall on the Y axis. Therefore, the ideal model (with high recall and low disparity) would be at the bottom right corner of the plot. You can choose the right balance of predictive performance and fairness for your particular needs, and select an appropriate model to see its details.
An important point to reinforce is that applying fairness mitigation to a model is a trade-off between overall predictive performance and disparity across sensitive feature groups - generally you must sacrifice some overall predictive performance to ensure that the model predicts fairly for all segments of the population.
To conclude, recall the principles recommended for developing responsible AI:
Reliability: We need to make sure that the systems we develop behave consistently with our ideas, values, and design principles so that they don't cause harm in the world.
Privacy: Complexity is part of AI systems, more data is needed and our software must ensure that that data is protected, that it is not leaked or disclosed.
Inclusiveness: Empower and engage people by making sure no one is left out. Consider inclusion and diversity in your models so that the entire spectrum of communities is covered.
Transparency: Transparency means that people creating AI systems must be open about how and why they are using AI, and also open about the limitations of their systems. Transparency also means interpretability, which refers to the fact that people must be able to understand the behavior of AI systems. As a result, transparency helps gain more trust from users.
Accountability: Define best practices and processes that AI professionals can follow, such as commitment to equity, to consider at every step of the AI lifecycle.