Shift AI was a success, connecting hundreds of professionals that were eager to propel the progress of AI and discuss the newest technologies in data mining, machine learning and neural networks. More at https://ai.shiftconf.co/.
Talk description:
RPA has exploded in recent years, leading to never-before-seen levels of enterprise and desktop automation. As automation continues across the enterprise, attention has turned from simple, deterministic processes to probabilistic, document-driven processes, now typically referred to as intelligent process automation (IPA). IPA has a number of new requirements and best practices distinct from those of RPA. Topics include human-in-the-loop learning, biases in model-chaining, and business-driven model efficacy assessment.
Shift AI 2020: How to identify and treat biases in ML Models | Navdeep Sharma | Shift Conference
Talk description:
With all the breakthroughs in the machine learning space, ML models are now being used, more than ever, to make decisions that affect human lives. Hence, judging the quality of a model can no longer be fulfilled by accuracy, precision, and recall alone. It's important to ensure that each individual and group of people is treated equally, free of any historical bias existing in the data. This talk focuses on some of the many potential ways to establish fairness metrics for ML models in your organization, along with the learnings and challenges I encountered while building a fairness tool for data scientists and business stakeholders.
Demo: Algorithmic Fairness Tool (AFT) was an innovation project, done at Accenture The Dock, which focused on bringing the latest research from academia and building a tool for the industry.
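A group fairness metric of the kind such a tool might surface can be sketched in a few lines. This is a hypothetical illustration with invented data, not the AFT implementation:

```python
# Sketch of demographic parity difference, one of the simplest group
# fairness metrics: a model satisfies demographic parity when each
# group receives positive predictions at roughly the same rate.

def demographic_parity_difference(predictions, groups):
    """Max gap in positive-prediction rate across groups.

    predictions: 0/1 model outputs
    groups:      group labels aligned with predictions
    """
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Toy data: group "a" gets a positive outcome 3/4 of the time, "b" 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A fairness tool would typically report such gaps per protected attribute and flag values above a chosen threshold; open source libraries such as IBM's AIF360 and Microsoft's Fairlearn implement this and many related metrics.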
Decision Intelligence: a new discipline emerges | Lorien Pratt
Where will the value be in AI when the hype is gone? Decision Intelligence is what's next: it is to AI as software engineering was to coding, a bridge from important problems to AI solutions. But it is also much more: integrating complex systems analysis, agent-based modeling, and many other disciplines, and forming the seeds of a Solutions Renaissance, where people work together with smart machines to solve the hardest problems faced by humanity.
Talk presented at the Analytics Frontiers Conference in Charlotte on March 21. The presentation evaluates opportunities and risks of AI and how consumers, businesses, society and governments can mitigate some of the risks.
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare | Gregory Nelson
Gregory S. Nelson, VP, Analytics and Strategy – Vidant Health | Adjunct Faculty Duke University
The promise of AI is quickly becoming a reality for a number of industries, including healthcare. For example, we have seen early successes in augmenting clinical intelligence for diagnostic imaging and in the early detection of pneumonia and sepsis. But what happens when the algorithms are biased? In this presentation, we will outline a framework for AI governance and discuss ways in which we can address algorithmic bias in machine learning.
Objective 1: Illustrate the issues of bias in AI through examples specific to healthcare.
Objective 2: Summarize the growing body of work in the legal, regulatory, and ethical oversight of AI models and the implications for healthcare.
Objective 3: Outline steps that we can take to establish an AI governance strategy for our organizations.
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
Keynote presentation on policy approaches to socio-technical causes of algorithmic bias at the Bias in Information, Algorithms and Systems workshop at the iConference on 25 March 2018.
What is algorithmic bias, and what does it mean for an algorithm to be fair or unfair? This talk explores fair decision making in the context of criminal justice, lending, hiring, and so on, providing both intuitions and their connection to legal and mathematical principles. It describes the basic frameworks of "allocative fairness," that is, fairness when giving out a benefit or a punishment.
Talk video and more at http://jonathanstray.com/introduction-to-algorithmic-bias
A talk at Code for America HQ in San Francisco.
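The allocative-fairness framing can be made concrete with a small error-rate comparison. The sketch below, with toy data invented for illustration (not from the talk), checks equal opportunity: whether truly qualified applicants in each group receive the benefit at the same rate:

```python
# Equal-opportunity check: among the truly qualified (label == 1),
# does each group get approved (prediction == 1) at the same rate?
# Unequal true positive rates mean the benefit is allocated unevenly.

def true_positive_rates(labels, predictions, groups):
    """Per-group TPR: P(prediction = 1 | label = 1, group)."""
    counts = {}  # group -> (qualified, qualified_and_approved)
    for label, pred, group in zip(labels, predictions, groups):
        if label == 1:
            qualified, approved = counts.get(group, (0, 0))
            counts[group] = (qualified + 1, approved + pred)
    return {g: approved / qualified
            for g, (qualified, approved) in counts.items()}

# Toy lending data: each group has three qualified applicants, but
# group "b"'s qualified applicants are approved half as often.
labels = [1, 1, 1, 0, 1, 1, 1, 0]
preds  = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["a"] * 4 + ["b"] * 4
print(true_positive_rates(labels, preds, groups))  # a: 2/3, b: 1/3
```

Which error rates should be equalized (false positives, false negatives, or both) is precisely the kind of value judgment the talk's legal and mathematical framing is meant to inform.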
Responsible AI in Industry: Practical Challenges and Lessons Learned | Krishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
ML practitioners and advocates are increasingly finding themselves becoming gatekeepers of the modern world. The models you create have the power to get people arrested or vindicated, get loans approved or rejected, determine the interest rate charged for such loans, who is shown to you among your long list of pursuits on Tinder, what news you read, and who gets called for a job phone screen or even a college admission... the list goes on. My goal in this talk is to summarize the kinds of disparate outcomes that are caused by cargo cult machine learning, and recent academic efforts to address some of them.
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut... | Adriano Soares Koshiyama
The workshop session focuses on the following topics:
Introduction to AI & Machine Learning (Algorithms)
Key Components of Algorithmic Impact Assessment
Algorithmic Explainability
Algorithmic Fairness
Algorithmic Robustness
Data Con LA 2020
Description
More and more organizations are embracing AI technology by infusing it into their products and services to differentiate themselves from their competitors. AI is being utilized in some sensitive areas of human life. In this session, let's look at some of the principles governing adoption of AI in a responsible manner. Why are companies accelerating adoption of AI?
Increasingly, organizations are accelerating adoption of AI to differentiate their products and services in the market. The outcomes of this digital transformation have been seen in the areas of optimizing operations, engaging customers, empowering employees, and transforming products and services.
*List some of the sensitive use cases where AI is being applied
*Why governing AI is important and what are those principles?
*How Microsoft is approaching it?
Speaker
Suresh Paulraj, Microsoft, Principal Cloud Solution Architect Data & AI
We now live in a world where we trust intelligent systems blindly, believing in their rationality and objectivity. However, in reality this is far from the truth.
In this talk given at the City.AI Singapore chapter, we explored the nature, implications and handling strategies for Model Bias in AI.
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
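As one concrete instance of the model-agnostic techniques such tutorials cover, permutation importance explains a model by shuffling a single feature and measuring how much accuracy drops. A minimal sketch, with a toy model and data invented for illustration:

```python
import random

# Permutation feature importance: shuffle one feature column and
# measure the drop in accuracy. A large drop means the model relies
# heavily on that feature; no drop means the feature is ignored.

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    rng = random.Random(seed)
    column = [row[feature_idx] for row in X]
    rng.shuffle(column)
    X_perm = [row[:feature_idx] + [value] + row[feature_idx + 1:]
              for row, value in zip(X, column)]
    return accuracy(model, X, y) - accuracy(model, X_perm, y)

# Toy classifier that only ever looks at feature 0.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.2], [0.1, 0.8]]
y = [1, 1, 0, 0]

print(permutation_importance(model, X, y, feature_idx=0))  # typically > 0
print(permutation_importance(model, X, y, feature_idx=1))  # 0.0: ignored
```

In practice the drop is averaged over many shuffles; scikit-learn's `permutation_importance` (in `sklearn.inspection`) does this for fitted estimators.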
Talk on Algorithmic Bias given at York University (Canada) on March 11, 2019. This is a shorter version of an interactive workshop presented at University of Minnesota, Duluth in Feb 2019.
Machine Learning: Addressing the Disillusionment to Bring Actual Business Ben... | Jon Mead
'Machine learning' is one of those cringy phrases, almost (if not already) taboo in the world of high-tech SaaS. Applying true machine learning to an organization's product(s), however, can have real benefit for the business, its clients, and the industry as a whole. From credit card fraud investigations to the way that a car is built, machine learning has permeated our everyday life without a common understanding of what it is and how to implement it.
Trusted, Transparent and Fair AI using Open Source | Animesh Singh
Fairness, robustness, and explainability in AI are some of the key cornerstones of trustworthy AI. Through its open source projects, IBM and IBM Research bring together the developer, data science and research community to accelerate the pace of innovation and instrument trust into AI.
Explainable AI with H2O Driverless AI's MLI module | Martin Dvorak
My H2O.ai Prague meetup presentation on explainable AI, model debugging, strategies to identify security vulnerabilities and undesired ML biases, plus an explanation of how the Driverless AI Machine Learning Interpretability module helps mitigate the aforementioned problems.
Simple Measures, Big Results: Measuring Program Impact Data | TechSoup
Simple Measures, Big Results: How to Collect, Analyze, and Share Program Impact Data
Webinar:
Tuesday, May 28
11 a.m. Pacific Time
Mention "evaluation" to a nonprofit leader and it may conjure up visions of complex measures, expensive consultants, privacy concerns, and high overhead. Yet there are lightweight, empirically sound outcome measures that can be collected, analyzed, and shared by virtually any organization. In this webinar, we will walk through selecting, collecting, and analyzing such measures and then creating dynamic dashboards in Power BI to share the results.
Brief summary of how the law and legal practice may be affected by the rise of AI, autonomous cars, robots, etc., with a look at what harms or biases may result and how law and the market might try to solve those problems.
Algorithms are taking control of our information-rich world. As the twin sibling of Big Data, they increasingly decide how society views us via constructed profiles (as criminals? as terrorists? as rich or poor consumers?); what we see as important, newsworthy, cool or profitable (eg Twitter trending topics, automated stock selling, Amazon recommendations, BBC website top news topics); and indeed what we see at all, as algorithms are increasingly used to filter out illegal or undesirable content as tools of public policy. By virtue of their automation, algorithms are perceived as neutral, objective and fair, unlike human decision makers - yet evidence increasingly shows the opposite: eg a series of legal complaints assert that Google games its own search results to promote its own economic interests and demote those of competitors or annoyances, while in the defamation field, French, German and Italian courts have decided that algorithmically generated autosuggestions in search can be libellous (eg "Bettina Wolf prostitute"). This paper asks whether any legal remedies do or should exist to *audit* proprietary algorithms, given their importance, and whether one way forward might be via existing and future subject access rights to personal data in EU data protection law. The transformation of these rights as proposed in the draft Data Protection Regulation is, however, not hopeful.
ASE Keynote 2022: From Automation to Empowering Software Developers | Margaret-Anne Storey
Machines today can write software, compose music, create art, predict events, and listen to and learn from humans. Notably, automation also plays an essential role in high-performing software development teams by automating tasks and improving developer productivity. But automation can’t (yet) replace human imagination and the intelligence that arises when multiple great minds work together to solve the complex problems that are inherent in software and systems design. In this talk, we will review how automation in modern software development has evolved and the many benefits it has brought. We will then explore how a deeper understanding of the developer experience points to untapped possibilities for innovating automation for software engineering, focusing on how new forms of automation can:
support developers to manage the cognitive complexity of today’s systems,
ease and enhance collaboration by speeding up feedback loops, and
help developers to get in and stay in a state of flow when developing.
We will conclude by discussing how we can measure the impact of new innovations on the developer experience, and how doing so will drive actionable change and empower developers to do their best work joyfully.
The Robot and I: How New Digital Technologies Are Making Smart People and Bus... | Cognizant
Our latest study shows that when enterprise robots are applied to automating core business processes, they can extend the creative problem-solving capabilities and productivity of human beings and deliver superior business results.
Why and How Modern IT Departments Will Use AI in 2018 | SymphonySummit
This paper takes an IT Operations Management and IT Service Management use-case perspective, with a number of key definitions helpful in providing a common basis for the potential use cases offered herein.
How AI and ML Can Optimize the Supply Chain.pdf | Global Sources
Artificial intelligence (AI) and machine learning (ML) were already buzzwords in the technology and manufacturing spheres before the pandemic upended the global supply chain. Ironically, with the disruption from the health crisis, the push toward translating them into reality has become stronger.
Although there is still a huge gap between “ambition and execution,” as industry analysts put it, the AI and ML promises of higher productivity and better resilience cannot be ignored. A few have started adopting the technologies and many more are expected to follow and reap the benefits of a highly integrated system in the coming years.
Global Sources’ latest e-book, How Artificial Intelligence & Machine Learning Can Optimize the Supply Chain, explores the potential benefit of technology on key areas, such as data collection and analysis, supply chain optimization, cost reduction, forecasting and planning. It offers a roadmap to augmentation and automation, and how this will help speed up operations, boost efficiency and build resilience. The book also covers challenges posed by the adoption of artificial intelligence and machine learning in current setups, and how they can be overcome.
Read more about the advantages of adopting a highly integrated system using artificial intelligence and machine learning.
Download here to get a free copy of How Artificial Intelligence & Machine Learning Can Optimize the Supply Chain.
Updated version of my talk from 2013, as given in March 2016.
Covers the basics of why algorithmic governance may be problematic for users and society, and suggests some legal remedies for these problems, including competition law and defamation law.
Challenges and Solution for Artificial Intelligence in Cybersecurity of the USAvishal dineshkumar soni
Advances in information technology now allow computers to act and reason in ways that resemble human thinking. Artificial intelligence (AI) is the branch of information technology concerned with building machines that react and work like a human mind. Key features of AI include analogues of the human senses: a system can recognize touch and speech, capabilities built into it so that it can carry out everyday activities without human assistance. More formally, AI is the study of intelligent agents that perceive the state of their environment and act to achieve their goals. Most such systems are built to serve situation-specific purposes by applying capabilities modelled on naturally occurring human faculties. In general, AI assists humans by applying learning and problem-solving techniques to understand high-level activities involving human-inspired elements such as decision-making and emotion. In contrast to human intelligence, artificial intelligence is machine-based. This research paper evaluates the current challenges related to artificial intelligence for cybersecurity in the United States, and proposes innovative solutions for applying AI to US cybersecurity.
Ethical AI: Establish an AI/ML Governance framework addressing Reproducibility, Explainability, Bias & Accountability for Enterprise AI use-cases.
Presentation on “Open Source Enterprise AI/ML Governance” at Linux Foundation’s Open Compliance Summit, Dec 2020 (https://events.linuxfoundation.org/open-compliance-summit/)
Full article: https://towardsdatascience.com/ethical-ai-its-implications-for-enterprise-ai-use-cases-and-governance-81602078f5db
Artificial Intelligence - intersection with compliance. How AI principles work with compliance principles around data protection. AI and Compliance. AI - SYSC 13.7 - FCA Compliance. AI and regulation. AI and FCA regulation. AI and ICO regulation.
AI and the Professions: Past, Present and FutureWarren E. Agin
A presentation to the National Conference of Lawyers and CPA’s - December 11, 2017. Describes the history of AI, explains why the legal and accounting professions are at a turning point, and predicts changes in the professions from AI adoption.
Analytic Law, LLC helps law firms and departments discover how to solve legal problems using analytic techniques, including data analytics, prediction systems, machine learning, game theory and behavioral economics.
Shift Remote: AI: Behind the scenes development in an AI company - Matija Ili...Shift Conference
Creating any type of company takes enormous effort, hard work, and persistence, let alone an Artificial Intelligence company. As we can assure you, building complex real-world AI solutions takes a lot more than simply assembling a team of brilliant AI scientists. In this talk, we will show you the crucial roles development teams play in a high-performing Artificial Intelligence company.
Shift Remote: AI: Smarter AI with analytical graph databases - Victor Lee (Ti...Shift Conference
Today's analytical graph databases are taking organizations to another level by connecting all their data, representing knowledge better, and obtaining answers to deeper questions in real time. These benefits extend to the world of machine learning and AI. This talk will illustrate several ways in which graph databases and graph analytics can deliver smarter AI:
1. Unsupervised learning with graph algorithms.
2. Feature extraction and enrichment with graph patterns.
3. In-database ML techniques for graphs.
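The second point above can be illustrated without any graph database at all. Below is a minimal, hedged Python sketch (the toy adjacency data, node names, and feature names are assumptions for illustration, not from the talk) showing how simple graph patterns such as degree and triangle membership can be extracted as ML features:

```python
# Illustrative sketch: deriving simple graph features (degree, triangle count)
# that can enrich a downstream ML feature vector. Toy in-memory data only.
from itertools import combinations

# Hypothetical undirected graph: node -> set of neighbours
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}

def degree(node):
    return len(graph[node])

def triangle_count(node):
    # Number of neighbour pairs that are themselves connected
    return sum(1 for u, v in combinations(graph[node], 2) if v in graph[u])

features = {n: {"degree": degree(n), "triangles": triangle_count(n)}
            for n in graph}
print(features["a"])  # degree 3, one triangle (a-b-c)
```

In a real deployment these features would be computed in-database over the full graph rather than in application code, but the shape of the feature vector is the same.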
Shift Remote: DevOps: Devops with Azure Devops and Github - Juarez Junior (Mi...Shift Conference
This talk explores how to modernize your infrastructure with Microsoft Azure DevOps and GitHub, the cultural transformation required to get there, and the opportunities that arise from such a shift.
Shift Remote: DevOps: Autodesks research into digital twins for AEC - Kean W...Shift Conference
Autodesk Research has been exploring the intersection of BIM (Building Information Modeling) and Internet of Things (IoT) for the last decade. Project Dasher (http://dasher360.com) integrates sensor data with model data from Autodesk’s Forge platform to contextualize IoT data in 3D. This session will look at the history of Dasher, as well as how some of its capabilities are now being integrated into Forge, allowing web developers to build digital twins integrating real-world performance data with 3D geometry.
Shift Remote: DevOps: When metrics are not enough, and everyone is on-call - ...Shift Conference
Is "Observability" just another term to make DevOps cool again? Let's talk about why observability is not just a term, and not just monitoring. This session explores how modern applications are driving a different approach to operations and changing the way companies think about their on-call strategy. Sustainable DevOps means application management plans keep pace with application velocity.
Shift Remote: DevOps: Modern incident management with opsgenie - Kristijan L...Shift Conference
Opsgenie is a cloud-based service for dev & ops teams, providing reliable alerts, on-call schedule management and escalations. Opsgenie monitors and reports on the entire life cycle of a ticket, allowing operations personnel to analyze incidents and outages and identify areas for improvement. Are you ready to improve your incident and alert management systems?
Shift Remote: DevOps: Gitlab ci hands-on experience - Ivan Rimac (Barrage)Shift Conference
DevOps tooling and practices are changing every day. Nowadays you can standardize and automate your infrastructure, application delivery, and policies as code. You’ll be ready to adapt quickly, helping your team do their best work faster while staying competitive. GitLab CI is a modern tool that can help you manage, package, and configure your apps, and much more. You can get your infrastructure to play very nicely with it. It is designed to improve software development productivity. Topics we will cover in this talk are pipeline configuration, DAG, components, controls, and job configuration.
Shift Remote: DevOps: DevOps Heroes - Adding Advanced Automation to your Tool...Shift Conference
DevOps is more than the process of automating your CI/CD pipelines to generate code and deployment artifacts for production. It's also about organizational change and integration of many subtle processes that help you to deliver applications seamlessly from development to production through your operations. Let's unlock the power of process integration with a getting-started walkthrough of a free online hands-on workshop that adds advanced automation to your DevOps toolbox. We'll take you through the integration of an organizational process as part of your DevOps strategy. Step by step, you'll learn how to build a data model, create an automated process, integrate user approval tasks, and more, using modern open source process automation tooling. No experience in automation integration is required. Join us for a short session that helps you add a new tool to your DevOps toolbox.
Shift Remote: Game Dev - Localising Mobile Games - Marta Kunic (Nanobit)Shift Conference
Nanobit is famous for its interactive story games. In the beginning we created those games only in English, without support for any other language, so many people couldn’t play them because they didn’t speak English. In this talk you will find out how we managed to translate our games and more than double the number of our players.
Shift Remote: Game Dev - Challenges Introducing Open Source to the Games Indu...Shift Conference
As many of us already know - open source is highly prevalent in the wider technical landscape. However, in the games industry, it is far less so. At Google we’ve been working on a variety of open source projects for game developers, and have come across several challenges that are fairly unique to the games industry -- so let’s take a look at them, and some proposed solutions that we’ve come up with to help you in that area!
Shift Remote: Game Dev - Ghost in the Machine: Authorial Voice in System Desi...Shift Conference
It’s easy to see an agenda in a piece of narrative work, or a criticism of an issue in a digital painting, but can math be an expression of our view of the world? Can the dynamics of systems express how we feel about the world? I strongly believe they can, so let me show you how, and why.
Shift Remote: Game Dev - Building Better Worlds with Game Culturalization - K...Shift Conference
With over 30 years of experience in digital media as a geographer and culturalization strategist, and 27+ years in games, Kate Edwards has been involved in the creation of many games, including major titles such as Halo, Fable, Age of Empires, Mass Effect, Call of Duty, and many, many others. She has seen it all when it comes to geopolitical and cultural issues that are often overlooked in content creation and can negatively affect the ability of content to be accepted overseas, and she has seen designers miss opportunities to create more robust worlds that engage the players from diverse cultural backgrounds. Kate will discuss the field of content culturalization and how it can assist game creators with building better game worlds that account for a wider range of cultural and geopolitical considerations.
Shift Remote: Game Dev - Open Match: An Open Source Matchmaking Framework - J...Shift Conference
Developers want to focus on connecting players together for online multiplayer game sessions, not gaming infrastructure. Google has worked alongside developers and publishers to create Open Match to solve this issue. This open source matchmaking framework provides developers with tools to build a scalable matchmaker without the overbearing tasks of managing their infrastructure when hit with a sudden surge of players. In this talk, we will explore Open Match, its features, and the benefits of building Open Match in open source.
Shift Remote: Game Dev - Designing Inside the Box - Fernando Reyes Medina (34...Shift Conference
In game development, resources are limited. For any creative endeavor, this might seem very restrictive and counterintuitive. In this talk we’ll explore how constraints can be used to our advantage, leading to designing and creating better and more unique products.
Shift Remote: Mobile - Efficiently Building Native Frameworks for Multiple Pl...Shift Conference
In this talk you will learn about some of the approaches that you can take to effectively design and build native frameworks that behave consistently across platforms while leveraging each platform's native strengths and APIs. We'll go over the process all the way from designing a feature, to writing a feature specification, to a passing test suite for every platform.
Shift Remote: Mobile - Devops-ify your life with Github Actions - Nicola Cort...Shift Conference
What's the first thing you should do when starting a new project? Set up a good CI system! With GitHub Actions you can do it in a couple of seconds. You can easily set up a workflow to build your project, test it on different machines, and deploy the results. In this talk we're going to see how you can set up a simple GitHub Action for your repository and start enjoying it right away.
Shift Remote: WEB - GraphQL and React – Quick Start - Dubravko Bogovic (Infobip)Shift Conference
Have you ever wondered if there's a way to create simple real-time apps? Were you ever tired of creating numerous APIs for your CRUD operations or just some simple aggregated data? There is a simple, fast way to do just that: GraphQL. We'll look into what GraphQL can do for us, how to create a simple open source GraphQL server on top of Postgres, and how to use the data in our front-end apps.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our beloved cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I will give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing, with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs, GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
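Active learning as mentioned above can be sketched generically. The following is a hedged, illustrative Python example of uncertainty sampling, not UiPath's actual implementation; the document ids and confidence scores are invented for illustration:

```python
# Generic active-learning sketch: pick the unlabelled document the current
# model is least confident about, so a human labels the most informative
# example first and the model improves with fewer annotations.

def model_confidence(doc):
    # Stand-in for a real classifier's top-class probability
    # (hypothetical scores keyed by document id).
    scores = {"doc1": 0.97, "doc2": 0.55, "doc3": 0.81}
    return scores[doc]

def next_to_label(unlabelled):
    # Uncertainty sampling: lowest top-class confidence = most informative.
    return min(unlabelled, key=model_confidence)

queue = ["doc1", "doc2", "doc3"]
print(next_to_label(queue))  # "doc2", the least confident prediction
```

In practice the confidence would come from the document-understanding model itself, and the selected item would be routed to a human validation station before retraining.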
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers, along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
- Your campaign sent to target colleagues for approval
- If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
- If the “Reject” button is pushed instead, colleagues will be alerted via a Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really harvest the gains of NeSy. These gains will only be realised when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
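As a toy illustration of “semantics as predictable inference”, consider a knowledge graph in which a predicate is declared transitive: the semantics then licenses predictable new triples. This Python sketch uses invented triples and an assumed composition rule; it illustrates the general idea only, not the author's formal definition:

```python
# Toy knowledge graph: semantics as predictable inference. Because
# "locatedIn" is declared transitive, (x, p, y) and (y, p, z) predictably
# entail (x, p, z). Predicates and entities are invented for illustration.

triples = {
    ("Dublin", "locatedIn", "Ireland"),
    ("Ireland", "locatedIn", "Europe"),
}

def infer_transitive(triples, predicate="locatedIn"):
    # Saturate the graph under the transitivity rule until a fixed point.
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (x, p1, y) in list(inferred):
            for (y2, p2, z) in list(inferred):
                if p1 == p2 == predicate and y == y2:
                    t = (x, predicate, z)
                    if t not in inferred:
                        inferred.add(t)
                        changed = True
    return inferred

print(("Dublin", "locatedIn", "Europe") in infer_transitive(triples))  # True
```

A link predictor trained over such a graph can be evaluated against exactly these licensed inferences, which is what makes the semantics operational.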
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
3. “The automation of our jobs is the central challenge facing us today. Any politician not addressing it is failing the American people.”
- Andrew Yang (2020 Democratic US Presidential Candidate)
4. “You just don’t get it. In China, the robots are going to come just in time.”
- Daniel Kahneman (2002 Nobel Memorial Prize in Economic Sciences)
7. “Robotic Process Automation is the technology that allows anyone today to configure computer software, or a ‘robot,’ to emulate and integrate the actions of a human interacting within digital systems to execute a business process.”
- UiPath
9. What is RPA?
[Diagram: copying values from an Excel sheet into QuickBooks, with the robot replaying cut/copy/insert actions across Excel, Chrome, and QuickBooks.]
10. What is RPA?
[Diagram: the same Excel-to-QuickBooks copy task, shown in attended vs. unattended automation modes.]
12. “[IPA is] a suite of business-process improvements and next-generation tools that assists the knowledge worker by removing repetitive, replicable, and routine tasks… IPA mimics activities carried out by humans and, over time, learns to do them even better.”
- Berruti et al. (McKinsey)
13. Example Use Cases
- Contract Review: review incoming or archival contracts, extracting and analyzing key phrases of particular business interest.
- Invoice Processing: match incoming invoices to corresponding purchase orders by extracting key data fields.
- Corporate Inbox: classify and route incoming documents from a shared inbox (e.g. support@...) to relevant personnel.
- Form Extraction: extract a known set of mixed-type fields from documents where page location is critical to interpretation (e.g. W-2s).
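The “Invoice Processing” use case above can be sketched in a few lines. This is a hedged, minimal Python illustration (the field names, tolerance, and purchase-order records are assumptions): once key fields are extracted from an invoice, matching it to a purchase order is a simple lookup, with unmatched invoices routed to a human:

```python
# Minimal sketch of invoice-to-PO matching on extracted fields. Real systems
# would match on more fields (PO number, dates, line items) and fuzzier rules.

purchase_orders = [
    {"po_number": "PO-1001", "vendor": "Acme", "amount": 1200.00},
    {"po_number": "PO-1002", "vendor": "Globex", "amount": 560.50},
]

def match_invoice(invoice, pos, amount_tolerance=0.01):
    """Return the PO number whose vendor matches and whose amount is
    within tolerance, or None to route the invoice to a human reviewer."""
    for po in pos:
        if (po["vendor"] == invoice["vendor"]
                and abs(po["amount"] - invoice["amount"]) <= amount_tolerance):
            return po["po_number"]
    return None  # no confident match: escalate to a person

extracted = {"vendor": "Globex", "amount": 560.50}  # fields from the extractor
print(match_invoice(extracted, purchase_orders))  # PO-1002
```

The interesting (and probabilistic) part of the pipeline is the extraction step that produces `extracted`; the matching itself stays deterministic and auditable.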
14. RPA vs. IPA
- RPA: copy the values into QuickBooks from the Excel sheet; automatically enter the same information into several applications.
- IPA: check the data protections on this contract; check that all of the documents in a loan approval are in good order.
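The contrast on this slide can be made concrete in code. Below is a minimal sketch (the function names, field names, and confidence threshold are assumptions): the RPA step is a deterministic mapping, while the IPA step acts on a model's confidence and escalates uncertain cases to a human reviewer, i.e. human-in-the-loop:

```python
# RPA: a deterministic rule always fires the same way.
# IPA: a model emits a confidence; low-confidence items go to a human.

def rpa_step(row):
    # Deterministic: copy a value from a spreadsheet row into a target record.
    return {"quickbooks_amount": row["excel_amount"]}

def ipa_step(prediction, confidence, threshold=0.9):
    # Probabilistic: act on confident model output, escalate the rest.
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

print(rpa_step({"excel_amount": 42}))   # always the same mapping
print(ipa_step("compliant", 0.95))      # handled automatically
print(ipa_step("non_compliant", 0.6))   # escalated to a person
```

The threshold is a business decision, which is where business-driven model efficacy assessment enters: it trades automation rate against review cost.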
23. GDPR “Explainability”
- Article 22: “have the right not to be subject to a decision based solely on automated processing”
- Articles 13, 14, 15: “[Disclosure of] the existence of automated decision-making, including profiling” and “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”
- Recital 71: “safeguards [must] include quality assurance checks, algorithmic auditing, independent third-party auditing, and more”, with “ongoing testing and feedback into an algorithmic decision-making system to prevent errors, inaccuracies, and discrimination on the basis of sensitive (‘special category’) data [including, but not limited to] race, ethnic origin, political opinion, [and] religion.”
25. Lesson 3: Existing Process
“If you refuse to trust decision-making to something whose process you don’t understand, then you should fire all your human workers, because no one knows how the brain (with its hundred billion neurons!) makes decisions.”
- Cassie Kozyrkov (Google Chief Decision Intelligence Officer)
29. Conclusion
- The 4th Industrial Revolution is well underway.
- Despite the advances in Deep Learning, IPA, and process automation in general, are still very nascent.
- Humans and computers cooperate much better than they compete.