With the broader adoption of digital technologies and AI, organisations face the emerging, unfamiliar risks of AI alongside the intensified, familiar risks of cybersecurity. AI and cybersecurity are intertwined, but risk silos often arise when they are dealt with separately at the technology and governance levels. This talk will explore the interactions between responsible AI and cybersecurity risks via industry case studies. It will show how we can break down the risk silos and use emerging trust-enhancing technologies, architecture and end-to-end software engineering/DevOps practices to connect the two worlds and uplift the risk management posture for both.
“AI is the new electricity” proclaims Andrew Ng, co-founder of Google Brain. Just as we need to know how to safely harness electricity, we also need to know how to securely employ AI to power our businesses. In some scenarios, the security of AI systems can impact human safety. On the flip side, AI can also be misused by cyber-adversaries and so we need to understand how to counter them.
This talk will provide food for thought in 3 areas:
Security of AI systems
Use of AI in cybersecurity
Malicious use of AI
🔹How will AI-based content-generating tools change your mission and products?
🔹This complimentary on-demand webinar explores multiple use cases driving adoption among early-adopter customers, giving product leaders insight into the future of generative AI-powered businesses and the potential generative AI holds for driving innovation and improving business processes.
Today, I will be presenting on the topic of "Generative AI, responsible innovation, and the law." Artificial Intelligence has been making rapid strides in recent years, and its applications are becoming increasingly diverse. Generative AI, in particular, has emerged as a promising area of innovation, with the potential to create highly realistic and compelling outputs.
Global Governance of Generative AI: The Right Way Forward (Lilian Edwards)
AI regulation has been a hot topic since the rise of machine learning (ML) in the “big data” era, but generative AI or “foundation model” tools like ChatGPT, DALL-E 2 (now 3) and Copilot, like ML before them, may create serious societal risks, including embedding and outputting bias; generating fake news, illegal or harmful content and inadvertent “hallucinations”; infringing existing laws relating, e.g., to copyright and privacy; as well as environmental, competition and workplace concerns.
Many nations are now considering regulation to address these worries, and can draw on a number of basic and hybrid models of governance. This paper canvasses models of mandatory comprehensive legislation (where the EU AI Act hopes to place itself as a gold standard model); vertical mandatory legislation (where China has quietly taken a lead); adapting existing law (see the many copyright lawsuits underway); and voluntary “soft law” such as codes of ethics, “blueprints”, or industry guidelines. Both the domestic and international regulatory scenes for AI are also increasingly politicised, as the rise of "AI safety" hype shows. Against this backdrop, what choices should smaller countries such as the UK and Australia make? Will international harmonisation lead to a race to the top, as with the GDPR, or to the bottom - rule by tech, for tech?
Leveraging Generative AI & Best Practices (DianaGray10)
In this event we will cover:
- What generative AI is and how it is being used for the future of work.
- Best practices for developing and deploying generative AI-based models in production.
- The future of generative AI and how it is expected to evolve in the coming years.
The Future of AI is Generative, not Discriminative - 5/26/2021 (Steve Omohundro)
The deep learning AI revolution has been sweeping the world for a decade now. Deep neural nets are routinely used for tasks like translation, fraud detection, and image classification. PwC estimates that they will create $15.7 trillion/year of value by 2030. But most current networks are "discriminative" in that they directly map inputs to predictions. This type of model requires lots of training examples, doesn't generalize well outside of its training set, creates inscrutable representations, is subject to adversarial examples, and makes knowledge transfer difficult. People, in contrast, can learn from just a few examples, generalize far beyond their experience, and can easily transfer and reuse knowledge. In recent years, new kinds of "generative" AI models have begun to exhibit these desirable human characteristics. They represent the causal generative processes by which the data is created and can be compositional, compact, and directly interpretable. Generative AI systems that assist people can model their needs and desires and interact with empathy. Their adaptability to changing circumstances will likely be required by rapidly changing AI-driven business and social systems. Generative AI will be the engine of future AI innovation.
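To make the discriminative-versus-generative contrast concrete, here is a minimal sketch (not from the talk) on 1-D toy data: the generative model fits a Gaussian per class and classifies via Bayes' rule, while the discriminative model is a logistic regression mapping inputs directly to predictions. All names and numbers are illustrative.

```python
import math
import random

# Toy 1-D data: class 0 centered at -2, class 1 centered at +2.
random.seed(0)
data = [(random.gauss(-2, 1), 0) for _ in range(200)] + \
       [(random.gauss(+2, 1), 1) for _ in range(200)]

# --- Generative: model p(x | y) as a Gaussian per class, classify via Bayes' rule.
def fit_generative(data):
    params = {}
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        params[label] = (mu, var, len(xs) / len(data))  # mean, variance, prior
    return params

def predict_generative(params, x):
    def log_joint(label):
        mu, var, prior = params[label]
        return math.log(prior) - 0.5 * math.log(2 * math.pi * var) \
               - (x - mu) ** 2 / (2 * var)
    return max((0, 1), key=log_joint)

# --- Discriminative: logistic regression, directly mapping x to p(y = 1 | x).
def fit_discriminative(data, lr=0.1, epochs=100):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x   # per-sample gradient ascent on log-likelihood
            b += lr * (y - p)
    return w, b

def predict_discriminative(model, x):
    w, b = model
    return 1 if w * x + b > 0 else 0

gen = fit_generative(data)
disc = fit_discriminative(data)
```

Because the generative model estimates p(x | y), its fitted means and variances can also be used to sample new, synthetic data points, a capability the purely discriminative model lacks.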
Unlocking the Power of Generative AI: An Executive's Guide (PremNaraindas1)
Generative AI is here, and it can revolutionize your business. With its powerful capabilities, this technology can help companies create more efficient processes, unlock new insights from data, and drive innovation. But how do you make the most of these opportunities?
This guide will provide you with the information and resources needed to understand the ins and outs of Generative AI, so you can make informed decisions and capitalize on the potential. It covers important topics such as strategies for leveraging large language models, optimizing MLOps processes, and best practices for building with Generative AI.
In this session, you'll get all the answers about how ChatGPT and other GPT-X models can be applied to your current or future project. First, we'll put in order all the terms – OpenAI, GPT-3, ChatGPT, Codex, DALL-E, etc. – and explain why Microsoft and Azure are often mentioned in this context. Then, we'll go through the main capabilities of Azure OpenAI and respective use cases that might inspire you to either optimize your product or build a completely new one.
AI basics; AI vs. machine learning vs. deep learning; AI applications; top 50 AI game-changer solutions; advanced analytics; conversational bots; financial services; healthcare; insurance; manufacturing; quality & security; retail; social impact; and transportation & logistics.
* "Responsible AI Leadership: A Global Summit on Generative AI"
* April 2023 guide for experts and policymakers
* Developing and governing generative AI systems
* 100+ thought leaders and practitioners participated
* Recommendations for responsible development, open innovation & social progress
* 30 action-oriented recommendations to help navigate AI complexities
"Generative AI: Responsible Path Forward", a presentation delivered during the DataHour webinar series by Analytics Vidhya and attended by more than a hundred data scientists and AI experts from around the world. The presentation addresses the importance of AI ethics and the development of responsible AI governance at tech firms to help mitigate AI risks and ethical issues.
Responsible Data Use in AI - core tech pillars (Sofus Macskássy)
In this deck, we cover four core pillars of responsible data use in AI: fairness, transparency, explainability, and data governance.
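As a concrete illustration of the fairness pillar (a generic sketch, not taken from the deck), demographic parity can be checked by comparing positive-prediction rates across groups; the group labels and predictions below are made up.

```python
# Hypothetical model outputs with a group attribute: (group, predicted_positive).
predictions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def positive_rates(predictions):
    # Fraction of positive predictions per group.
    totals, positives = {}, {}
    for group, pred in predictions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(predictions)
# Demographic-parity gap: difference between the best- and worst-treated group.
gap = max(rates.values()) - min(rates.values())
```

A gap near zero suggests the model selects positives at similar rates across groups; a large gap (here 0.5) is a signal to investigate further, though demographic parity is only one of several fairness criteria.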
How can we use generative AI in learning products? A rapid introduction to generative AI. Presented at ED Games Expo 2023 at the U.S. Department of Education, September 22, 2023.
How do we protect the privacy of users when building large-scale AI-based systems? How do we develop machine-learned models and systems with fairness, accountability, and transparency in mind? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical, legal, and technical challenges encountered by researchers and practitioners alike. In this talk, we will first motivate the need for adopting a "fairness and privacy by design" approach when developing AI/ML models and systems for different consumer and enterprise applications. We will then focus on the application of fairness-aware machine learning and privacy-preserving data mining techniques in practice, presenting case studies spanning different LinkedIn applications (such as fairness-aware talent search ranking, privacy-preserving analytics, and LinkedIn Salary privacy & security design), and conclude with the key takeaways and open challenges.
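One widely used privacy-preserving technique of the kind referenced here is the Laplace mechanism from differential privacy. The sketch below is a generic illustration, not LinkedIn's implementation; the sensitivity and epsilon values are assumptions.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, rng):
    # A counting query has sensitivity 1 (adding or removing one user changes
    # the count by at most 1), so Laplace noise with scale 1/epsilon yields
    # epsilon-differential privacy for the released count.
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(7)
noisy = private_count(100, epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy but noisier answers; real deployments also have to track the cumulative privacy budget across repeated queries.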
Explore the risks and concerns surrounding generative AI in this SlideShare presentation. It delves into the key areas of concern, including bias, misinformation, job loss, privacy, control, overreliance, unintended consequences, and environmental impact, with examples that highlight the potential challenges associated with generative AI, and underscores the importance of responsible use and ethical considerations in navigating the complex landscape of this transformative technology.
Generative AI: Past, Present, and Future – A Practitioner's Perspective (Huahai Yang)
As the academic realm grapples with the profound implications of generative AI and related applications like ChatGPT, I will present a grounded view from my experience as a practitioner. Starting with the origins of neural networks in the fields of logic, psychology, and computer science, I trace their history and align them within the wider context of the pursuit of artificial intelligence. This perspective will also draw parallels with historical developments in psychology. Against this backdrop, I chart a proposed trajectory for the future. Finally, I provide actionable insights for both academics and enterprising individuals in the field.
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
Privacy in AI/ML Systems: Practical Challenges and Lessons LearnedKrishnaram Kenthapadi
How do we protect the privacy of users when building large-scale AI based systems? How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of privacy-preserving AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
GENERATIVE AI, THE FUTURE OF PRODUCTIVITYAndre Muscat
Discuss the impact and opportunity of using Generative AI to support your development and creative teams
* Explore business challenges in content creation
* Cost-per-unit of different types of content
* Use AI to reduce cost-per-unit
* New partnerships being formed that will have a material impact on the way we search and engage with content
Part 4 of a 9-part research series named "What matters in AI", published on www.andremuscat.com.
An Introduction to Generative AI - May 18, 2023 (CoriFaklaris1)
For this plenary talk at the Charlotte AI Institute for Smarter Learning, Dr. Cori Faklaris introduces her fellow college educators to the exciting world of generative AI tools. She gives a high-level overview of the generative AI landscape and how these tools use machine learning algorithms to generate creative content such as music, art, and text. She then shares some examples of generative AI tools and demonstrates how she has used some of these tools to enhance teaching and learning in the classroom and to boost her productivity in other areas of academic life.
A non-technical overview of Large Language Models, exploring their potential, limitations, and customization for specific challenges. While this deck was tailored with a financial-industry audience in mind, its content remains broadly applicable.
(This updated version builds on our previous deck: slideshare.net/LoicMerckel/intro-to-llms.)
GPT-4 can pass the American state bar exam, but before you go expecting to see robot lawyers taking over the courtroom, hold your horses, cowboys – we're not quite there yet. That being said, AI is becoming increasingly human-like, and as a VC we need to start thinking about how this new wave of technology is going to affect the way we build and run businesses. What do we need to do differently? How can we make sure that our investment strategies reflect these changes? It's a brave new world out there, and we've got to keep the big picture in mind!
Sharing here with you what we at Cavalry Ventures found out during our Generative AI deep dive.
SGCI - Science Gateways - Technology-Enhanced Research Under Consideration of... (Sandra Gesing)
Science gateways - also called virtual research environments or virtual labs - allow science and engineering communities to access shared data, software, computing services, instruments, and other resources specific to their disciplines, and to use them in teaching environments as well. In the last decade, mature, complete science gateway frameworks have evolved, such as HUBzero and Galaxy as well as Agave and Apache Airavata. Successful implementations have been adapted for several science gateways - for example, the technologies behind the science gateway CIPRES, which is used by over 20,000 users to date and serves the community in the area of large phylogenetic trees. Lessons learned from the last decade include that approaches should be technology agnostic, use standard web technologies, or deliver a complete solution. Independent of the technology, the major drivers for science gateways are the user communities, and user engagement is key for successful science gateways. The US Science Gateways Community Institute (SGCI), opened in August 2016, provides free resources, services, experts, and ideas for creating and sustaining science gateways. It offers five areas of services to the science gateway developer and user communities: the Incubator, Extended Developer Support, the Scientific Software Collaborative, Community Engagement and Exchange, and Workforce Development. The talk will give an introduction to science gateways, examples of science gateways, and an overview of the services offered by the SGCI to serve user communities and developers in creating successful science gateways.
44CON 2014 - Security Analytics Beyond Cyber, Phil Huggins (44CON)
A quick summary of the current state of big data technology and data science approaches used in cyber / network-defender security analytics, including summary use cases, a walkthrough of a reference architecture, and a breakdown of the required skills. The focus is on the knowledge needed to run a proof of concept and establish a programme for early benefits. It also includes a view on the future of extending the platforms and capabilities of security analytics to cover performance metrics and data-driven security management approaches.
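A minimal flavour of the kind of security analytics described, assuming (hypothetically) per-host login-failure counts exported from a SIEM: flag hosts whose daily count deviates sharply from a historical baseline. Host names and counts are invented for illustration.

```python
import statistics

# Hypothetical historical daily login-failure counts (the baseline).
baseline = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]
# Hypothetical counts observed today, per host.
today = {"web-01": 11, "web-02": 13, "db-01": 94}

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    # Flag counts more than `threshold` standard deviations above the baseline.
    return (count - mean) / stdev > threshold

alerts = [host for host, count in today.items() if is_anomalous(count)]
```

Real security analytics pipelines layer far richer features and models on top, but the shape is the same: establish a baseline, score deviations, and route the outliers to an analyst.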
Software Architecture for Foundation Model-Based Systems (Liming Zhu)
With the successful implementation of Large Language Models (LLMs) in chatbots like ChatGPT, there is growing attention on foundation models, which are anticipated to serve as core components in the development of future AI systems. Yet, systematic exploration of the design of foundation model-based systems, particularly concerning risk management, trust, and trustworthiness, remains limited. In this talk, I outline the challenges and initial approaches in architecting LLM-based systems, and how LLM systems impact software engineering. I point to some initial directions, such as architecting as a process of understanding (rather than designing/building), setting and trading off guardrails (rather than quality attributes), and radical observability.
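As one possible reading of "setting guardrails" combined with observability (a sketch under assumptions, not the talk's reference architecture), an LLM call can be wrapped with output validation and per-attempt logging; `fake_llm` is a stand-in for a real model call.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrails")

def fake_llm(prompt):
    # Stand-in for a real foundation-model call; returns a JSON string.
    return json.dumps({"answer": "42", "confidence": 0.9})

def guarded_call(prompt, max_retries=2):
    """Call the model, validate its output against simple guardrails,
    and log every attempt for observability. Retries on violations."""
    for attempt in range(1 + max_retries):
        raw = fake_llm(prompt)
        log.info("attempt=%d raw=%s", attempt, raw)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # guardrail 1: output must be valid JSON
        if "answer" in parsed and parsed.get("confidence", 0) >= 0.5:
            return parsed  # guardrail 2: required field and a confidence floor
    raise RuntimeError("model output failed guardrails")

result = guarded_call("What is the answer?")
```

The point of the pattern is that guardrails sit outside the opaque model: the system constrains and observes what the model emits rather than assuming anything about how it works internally.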
Responsible/Trustworthy AI in the Era of Foundation Models (Liming Zhu)
The emergence of large language models (LLMs) such as GPT-4 has garnered significant attention, placing foundation models at the forefront of AI systems. However, integrating foundation models raises concerns regarding responsible/trustworthy AI due to their opaque nature and rapidly moving capability boundaries. This talk addresses these challenges in the context of industry and defence and proposes a pattern-oriented reference architecture for responsible/trustworthy AI design in foundation model-based systems. It explores the evolution of AI systems architecture, transitioning from a many-model/module architecture to an increasingly monolithic architecture centered around foundation models.
ICSE23 Keynote: Software Engineering as the Linchpin of Responsible AI - Liming Zhu
From humanity’s existential risks to safety risks in critical systems to ethical risks, responsible AI, as the saviour, has become a massive research challenge with significant real-world consequences. However, achieving responsible AI remains elusive despite the plethora of high-level ethical principles, risk frameworks and progress in algorithmic assurance. In the meantime, software engineering (SE) is being upended by AI, grappling with building system-level quality and alignment from inscrutable ML models and code generated from natural language prompts. The upending poses new challenges and opportunities for engineering AI systems responsibly. This talk will share our experiences in helping the industry achieve responsible AI systems by inventing new SE approaches. It will dive into industry challenges (such as risk silos and principle-algorithm gaps) and research challenges (such as lack of requirements, emerging properties and inscrutable systems) and make the point that SE is the linchpin of responsible AI. But SE also requires some fundamental rethinking - shifting from building functions to AI systems to discovering and managing emerging functions from AI systems. Only by doing so can SE take on critical new roles, from understanding human intelligence to building a thriving human-AI symbiosis.
Challenges in Practicing High Frequency Releases in Cloud Environments Liming Zhu
Talk at RELENG 2014
Full paper: http://www.nicta.com.au/pub?doc=7925
The continuous delivery trend is dramatically shortening release cycles from months into hours. Applications with high frequency releases often rely heavily on automated deployment tools using cloud infrastructure APIs. We report some results from experiments on reliability issues of cloud infrastructure and trade-offs between using heavily-baked and lightly-baked images. Our experiments were based on Amazon Web Service (AWS) OpsWorks APIs and configuration management tool Chef. As a result of our experiments, we then propose error handling practices that can be included in tailor-made continuous deployment facilities.
More related info at our DevOps book http://www.ssrg.nicta.com.au/projects/devops_book/
Dependable Operation - Performance Management and Capacity Planning Under Con... - Liming Zhu
Talk at http://www.cmga.org.au/ Meet up
Modern large-scale applications experience sporadic changes due to operational activities such as upgrade, redeployment, on-demand scaling and interferences from other simultaneous operations. This poses new challenges in system monitoring, capacity planning, performance management, error detection and diagnosis. For example, the traditional anomaly-detection-based techniques are less effective during the “sporadic” operation period as a wide range of legitimate changes confound the situation and make performance baseline establishment for “normal” operation difficult. The increasing frequency of these sporadic operations (e.g. due to continuous deployment) is exacerbating the problem. In this talk, we will introduce a number of ongoing research activities at NICTA addressing these issues. For example, we propose the Process Oriented Dependability (POD) approach, an approach that explicitly models these sporadic operations as processes and uses the process context to filter logs, traverse fault trees and conduct adaptive monitoring.
Modelling and Analysing Operation Processes for Dependability Liming Zhu
The 43rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN13) talk slides. June 27th, 2013. Full text here: http://www.nicta.com.au/pub?doc=7031
Responsible AI & Cybersecurity: A tale of two technology risks
1. Australia’s National Science Agency
Responsible AI & Cybersecurity: A tale of two technology risks
Liming Zhu
Research Director, CSIRO’s Data61
Chair, Blockchain & Distributed Ledger Technology, Standards Australia
Expert on working groups: ISO/IEC JTC 1/WG 13 Trustworthiness; ISO/IEC JTC 1/SC 42/WG 3 - Artificial intelligence – Trustworthiness
2. CSIRO’s Data61: Australia’s Largest Data & Digital Innovation R&D Organisation
• 1000+ talented people (including affiliates/students)
• Home of Australia’s National AI Centre
• Data61 generated 18+ spin-outs and 130+ patent groups
• 200+ government & corporate partners
• 300+ PhD students; 30+ university collaborators
• Facilities: Mixed-Reality Lab, Robotics Innovation Centre, AI4Cyber HPC Enclave
• Focus areas: Responsible Tech/AI; Privacy & RegTech; Engineering & Design of AI Systems; Resilient & Recovery Tech; Cybersecurity; Digital Twin; Spark (bushfire) toolkit
3. Trend: Value Arises from Data Sharing & Joint Analytics
Data sharing, Data-as-a-Service & AI/ML/Model-as-a-Service
§ More sources & types from public & partners
§ Intergovernmental data sharing
§ Access and use of sensitive data from another organization/country
§ Privacy, but also commercial and other sensitivity
§ Data analytics over encrypted data - “sharing/use without access”
§ Open data/innovation (anonymized or desensitized data)
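The "sharing/use without access" idea can be made concrete with a toy secure-aggregation sketch: each party splits its input into additive secret shares, so only the joint total is ever reconstructed. This is a simplified stand-in for the cryptographic techniques (MPC, homomorphic encryption) discussed later in the deck; the function names and modulus are illustrative choices, not any specific Data61 tool.

```python
import random

MODULUS = 2**61 - 1  # large prime; all arithmetic is modular

def share(value, n_parties, modulus=MODULUS):
    """Split an integer into n additive shares; any n-1 shares reveal nothing."""
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def joint_sum(party_values, modulus=MODULUS):
    """Each party shares its value; only the aggregate is ever reconstructed."""
    n = len(party_values)
    # Each party sends one share to every other party.
    all_shares = [share(v, n, modulus) for v in party_values]
    # Each party locally sums the shares it received...
    partial = [sum(s[i] for s in all_shares) % modulus for i in range(n)]
    # ...and only these partial sums are combined into the final answer.
    return sum(partial) % modulus

# Three organisations learn their combined total without revealing their inputs.
total = joint_sum([120, 45, 335])  # -> 500
```

No single party ever sees another party's raw value, only uniformly random shares; the correct sum emerges because the random masks cancel modulo the modulus.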
4. Trend: Regulation/Ethics Overlay
Data Economy: Balancing Innovation & Regulatory Burden
Legislation
• GDPR, EU AI Act
• Australia: AU Privacy Act; Data Breach Notification Scheme; Consumer Data Right (CDR) for Open Banking, Energy…
Increasing concerns
• Cybersecurity: data (increasingly integrity) and AI
• Responsible AI - trust in data/AI-powered services
- Fairness, accountability, transparency, privacy, civil liberties…
- Rights to explanation and redress
- Right to be forgotten
5. Tech Trend: Trust Architecture - AI and Security
Systems operating in the context of:
• Zero-trust environments
• Trustless machines/protocols
• Distributed trust/blockchain
• Distributed infrastructure: data, compute/code, models
6. Distributed Trust Architecture in AI Engineering/Systems
Circa 2014-15:
• Entanglements, correction cascades, undeclared customers
• Data (model, code, config…) dependencies
• Anti-patterns
• Debt: abstraction, reproducibility, process management, culture
2020-2021/Today:
• “Federated data collection, storage, model, and infrastructure”
• “Co-design and co-versioning”…
• Implications of foundation models
9. Australian AI Ethics Principles (security is part of them)
• Human, societal and environmental wellbeing
• Human-centred values
• Fairness
• Privacy protection and security
• Reliability and safety
• Transparency and explainability
• Contestability
• Accountability
10. Challenge: Diverse stakeholders and risk landscape
• Different stakeholder interests & a complex risk-assessment landscape
• Industry level vs. org level vs. team level
11. Challenge: Competing risk silos
• Risk silos competing for resources
• CISO vs. CIO: security team vs. dev team
• Board risk committees: financial, legal, reputation + HSE + privacy + security + ethics + AI + …
• Limited connections between risks assessed separately
• Forced and meaningless roll-ups
• Risk management perceived as a barrier - a separate, dreaded activity
12. Challenge: Risk integration and expertise
• Each org has existing, differing governance/risk approaches
- Shortage of expertise to assess new risks, e.g. AI risks
- No capacity to examine each project deeply
- Checklists, conversations, info sheets
- Not underpinned by formal or technical approaches
• Treating risk analysis as mere hazard/threat analysis, omitting
- System vulnerability, exposure risks and response/mitigation risks
13. Solution Principles: lift the boat, connect the risks
• Lift the boat - solutions that benefit multiple kinds of risk management, e.g.
- End-to-end provenance across data, code and AI models
- Control intercepts, federated learning, distributed trust
• Connected risks - meaningful technical trade-offs/mitigations, e.g.
- Patterns with multi-risk consequences and trade-offs
• Whole-of-system risks - meaningful aggregation, e.g.
- Connected patterns across process, governance and product
• Integration with existing processes
- Product development & governance processes
- Most efficient use of specialised expertise
14. Responsible and Secure (AI) Systems
(Layered diagram) The layers, from the outside in:
• Multi-level governance, AI ethics principles and cybersecurity
• Trusted user interaction
• Responsible/secure data management
• AI pipelines: responsible/secure-AI-by-design, accountable DevSecOps
• AI components with fair & secure AI DevOps, alongside non-AI components
16. Connected Risk Assessment
• Connect multiple technical risks where possible
• Focus first on mitigations that help address multiple risks
• Then consider single-risk mitigations
• Mitigations/responses can introduce overlooked new risks - these must be assessed too
AI4M Operationalising Responsible AI project: https://research.csiro.au/ai4m/operationalising-responsible-ai/
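Prioritising mitigations that address multiple risks resembles greedy set cover over a mitigation-to-risk mapping. A minimal sketch, with all mitigation and risk names invented for illustration:

```python
def pick_mitigations(mitigations, risks):
    """Greedy set cover: repeatedly prefer the mitigation covering the most
    still-open risks; stop when no mitigation covers anything new."""
    open_risks, chosen = set(risks), []
    while open_risks:
        best = max(mitigations, key=lambda m: len(mitigations[m] & open_risks))
        covered = mitigations[best] & open_risks
        if not covered:
            break  # remaining risks need single-risk (or new) mitigations
        chosen.append(best)
        open_risks -= covered
    return chosen, open_risks

# Illustrative mapping of mitigations to the risks they help address.
mitigations = {
    "end-to-end provenance": {"security", "accountability", "explainability"},
    "federated learning":    {"privacy", "security"},
    "output vetting":        {"privacy", "emotional harm"},
}
chosen, uncovered = pick_mitigations(
    mitigations,
    {"security", "privacy", "accountability", "explainability", "fairness"})
# Multi-risk mitigations are chosen first; "fairness" remains uncovered
# and falls through to single-risk treatment.
```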
18. Pattern Catalogue - extra key info
• In software engineering, a pattern is a reusable solution to a recurring problem in a given context
• Patterns capture the experience of experts about best practices
• They are documented in an accessible, structured way for stakeholders (e.g. developers)
• A pattern catalogue is a collection of related patterns, used together or independently of each other
Pattern template:
• Summary
• Type of pattern
• Type of objective
• Target users
• Impacted stakeholders
• Relevant principles
• Context
• Problem
• Solution
• Benefits
• Drawbacks
• Related patterns
• Known uses
https://research.csiro.au/ss/science/projects/responsible-ai-pattern-catalogue/
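The template above maps directly onto a data structure. A sketch in Python, with an illustrative (not official) catalogue entry filled in:

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    """One entry in a responsible-AI pattern catalogue; fields mirror the template."""
    summary: str
    pattern_type: str            # e.g. governance, process, product
    objective: str
    target_users: list
    impacted_stakeholders: list
    relevant_principles: list    # e.g. privacy, accountability
    context: str
    problem: str
    solution: str
    benefits: list
    drawbacks: list
    related_patterns: list = field(default_factory=list)
    known_uses: list = field(default_factory=list)

# Illustrative entry, loosely based on the Data Airlock idea from a later slide.
airlock = Pattern(
    summary="Send analytics to the data, return only vetted insights",
    pattern_type="product",
    objective="trust",
    target_users=["architects"],
    impacted_stakeholders=["data custodians", "analysts"],
    relevant_principles=["privacy", "security", "accountability"],
    context="Sensitive data cannot leave the custodian",
    problem="Joint analytics normally requires copying raw data",
    solution="Run models inside a vault; automatically vet all outputs",
    benefits=["no raw data sharing"],
    drawbacks=["vetting adds latency"],
)
```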
26. Analytics/Simulation to Data: Data Airlock
Not data to analytics/simulation
• Analytics/simulation requests go to the data; only insights come back
• No data sharing - data is kept away in vaults
• Automated vetting of insights: all analytics models and simulation results are vetted
• Risks mitigated: security, privacy, emotional harm, accountability…
• Case studies: major government agency
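A minimal sketch of the airlock idea, using small-count suppression as a stand-in for the automated vetting step; the function names and threshold are illustrative, and the real system's vetting is richer than this:

```python
def run_in_airlock(query, records, min_count=5):
    """Execute an analytics request inside the vault; release only vetted output.

    Raw records never leave the vault. Aggregates below a disclosure
    threshold are suppressed (the stand-in for automated vetting).
    """
    result = query(records)                       # analytics travels to the data
    vetted = {k: v for k, v in result.items() if v >= min_count}
    suppressed = set(result) - set(vetted)
    return vetted, suppressed

# Toy dataset held inside the vault.
records = ["a", "a", "a", "a", "a", "a", "b", "b"]

def counts(rs):
    """The visiting analytics: a simple frequency count."""
    return {v: rs.count(v) for v in set(rs)}

vetted, suppressed = run_in_airlock(counts, records)
# "a" (count 6) is released; "b" (count 2) is suppressed as too identifying.
```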
27. Trust Architecture at Scale: Consumer-Driven Sharing
Enabling FinTechs, including blockchain-based ones
• Consumer Data Right (CDR): Australia’s legislation impacting consumer data and its services
- Consumers can authorise 3rd parties to access their data
- Currently designated sectors: Banking, Energy…
• Data61’s (recent) role
- Setting architecture/data API standards
- Security profile standards
• Trust architecture trade-offs
- Trusted gateway vs. peer-to-peer trust
- Trust in nodes: processing-only vs. processing + use
• Risks mitigated: security, privacy, over-regulation, accountability, irresponsible data/analytics
https://consumerdatastandards.gov.au
ACCC Consumer Data Right in Energy consultation paper: data access models for energy data, 2019
28. Trust Architecture: Federated ML/Data Analytics
From limited access to full encryption during use
When there are cultural or legislative restrictions on data sharing, consider alternatives!
• Federated model: “data co-ops”
- No centralised data repositories
- Edge AI and analytics
• Scientific approaches
- Zero-knowledge proofs, homomorphic encryption, secure multi-party computation
• Risks mitigated: security, privacy, accountability, explainability
Other case studies at Data61
• Bank + telco for fraud analytics
• Two gov departments for joint insights
Other supported scenarios
• Innovation in secure transactions
• Access to data by regulators
• Cross-border data flow
29. More Federated Learning Architecture & Use Cases
Use cases
- Keyboard prediction
- Browser history recommendation
- Visual object detection
- Diagnosis and treatment prediction
- Drug discovery (across facilities, involving IP)
- Meta-analysis over distributed medical databases
- Augmented reality
More Data61 case studies
• Named-entity resolution
• Fraud/anomaly detection (bank + telco)
• Crop yield prediction - federated transfer learning
• IIoT fault detection
Data61 work: SK Lo, Q Lu, L Zhu, HY Paik, X Xu, C Wang: Architectural patterns for the design of federated learning systems. Journal of Systems and Software (2021)
Data61 work: SK Lo, Q Lu, HY Paik, L Zhu: FLRA: A reference architecture for federated learning systems. European Conference on Software Architecture (2021)
Data61 work: Wei, K., Li, J., Ding, M., Ma, C., Yang, H.H., Farokhi, F., Jin, S., Quek, T.Q.S., Poor, H.V.: Federated learning with differential privacy: algorithms and performance analysis. IEEE Transactions on Information Forensics and Security 15, 3454-3469 (2020)
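The core federated-learning round behind these use cases is federated averaging (FedAvg): clients train locally and share only model weights, never data, and the server averages the weights weighted by dataset size. A minimal sketch; the one-parameter least-squares model is purely illustrative:

```python
def local_update(weights, data, lr=0.1):
    """One round of local training: a single gradient step of least squares
    on (x, y) pairs for the one-parameter model y = w * x."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, clients):
    """FedAvg: each client trains locally on its private data; only the
    updated weights are shared, then averaged weighted by dataset size."""
    updates = [(local_update(global_w, d), len(d)) for d in clients]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Two clients hold disjoint private data, both consistent with w = 2.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
# The global model converges to w = 2 without any raw data leaving a client.
```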
31. Trustworthiness: Model/Data Integrity & Provenance
• Blockchain improves trust in data integrity and model integrity
• Provenance is the key
Data61 work: X Xu, C Wang, J Wang, et al.: Improving trustworthiness of AI-based dynamic digital-physical parity, 2021 (submitted)
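The integrity property can be sketched with a plain hash chain: each provenance event is linked to the previous one by hash, so tampering with any earlier event breaks every later link. This is illustrative only; a real deployment would anchor the hashes on a ledger.

```python
import hashlib
import json

def record(chain, event):
    """Append a provenance event, linking it to the previous entry by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every link; any tampered event or broken link fails."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"event": entry["event"], "prev": prev},
                          sort_keys=True)
        if (entry["prev"] != prev
                or entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev = entry["hash"]
    return True

# Illustrative provenance of a dataset and the model trained from it.
chain = []
record(chain, {"step": "ingest", "artifact": "dataset-v1"})
record(chain, {"step": "train", "artifact": "model-v1"})
ok = verify(chain)  # True while untampered
```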
32. Trust Architecture Patterns: Privacy-by-Design
GDPR & Australian Privacy Principles
Data61 work: Su Yen Chia, Xiwei Xu, Hye-Young Paik, Liming Zhu: Analysing and extending privacy patterns with architectural context. SAC 2021
33. Safe Data Sharing: Provable Desensitization & Synthetic Data
Quantified risk assessment, mitigation and compliance; synthetic data sets
§ Provably desensitized data sharing/release for joint analytics and simulation
§ Synthetic datasets that balance authenticity and obfuscation
§ Quantified risks and mitigations
§ Case studies: worked with 30+ government agencies
R4: Re-identification Risks Ready-Reckoner
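One standard way to quantify re-identification risk is k-anonymity: a release is k-anonymous if every individual is hidden in a group of at least k records sharing the same quasi-identifiers. The sketch below is a generic illustration (not the R4 tool), with invented example rows:

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Smallest group size when rows are grouped by their quasi-identifiers.
    Small k (especially k = 1) flags records at high re-identification risk."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values())

# Illustrative micro-dataset: age band and postcode are quasi-identifiers.
rows = [
    {"age_band": "30-39", "postcode": "2000", "diagnosis": "flu"},
    {"age_band": "30-39", "postcode": "2000", "diagnosis": "cold"},
    {"age_band": "40-49", "postcode": "2611", "diagnosis": "flu"},
]
k = k_anonymity(rows, ["age_band", "postcode"])
# k == 1: the 40-49/2611 row is unique, so this release is not safe as-is;
# desensitization (generalising bands, suppressing rows) would raise k.
```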
35. Our Approach: Automated tools assisting humans
• Knowledge graphs across AI and security risks
- Use a graph-structured data model or topology to integrate data
- Graphically present semantic relationships between entities
• Responsible/Secure AI knowledge graph, incorporating unstructured data:
- AI ethics principles, security standards, policy documents…
- AI and security incidents…
- Pattern catalogues, online solutions…
- Dark-pattern datasets…
- Supplemented with GPT
- …
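At its core, such a knowledge graph is a set of subject-relation-object triples. The entities and relations below are invented for illustration, showing how a two-hop query can connect an AI ethics principle to a security incident type and on to a mitigation:

```python
# Illustrative triples spanning AI ethics and security risks.
kg = {
    ("model-inversion",   "type_of",   "security-incident"),
    ("model-inversion",   "threatens", "privacy"),
    ("privacy",           "part_of",   "ai-ethics-principles"),
    ("federated-learning","mitigates", "model-inversion"),
}

def query(kg, relation):
    """All (subject, object) pairs connected by a given relation."""
    return {(s, o) for s, r, o in kg if r == relation}

def mitigations_for_principle(kg, principle):
    """Two-hop walk: find incidents threatening the principle,
    then mitigations addressing those incidents."""
    threatened_by = {s for s, r, o in kg if r == "threatens" and o == principle}
    return {s for s, r, o in kg if r == "mitigates" and o in threatened_by}

found = mitigations_for_principle(kg, "privacy")
# Connects the privacy principle, via the model-inversion incident,
# to federated learning as a candidate mitigation.
```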
36. Knowledge Provenance and Explainability
• Aspect extraction
- Rule-based (TOSEM 2022)
- Supervised NER + QA (TOSEM, in revision)
- Unsupervised clustering (ASE 2021)
• Vulnerability KG
- Four heterogeneous sources (NVD, IBM X-Force, ExploitDB, Openwall)
- Seven vulnerability aspects
- Links to CWE + CAPEC
- Integrates CVSS classifications
- Adds aspect synonyms
- A web interface to access the knowledge: http://vbom.org/#/home
38. Integrating user tasks/failures for better testing
(Pipeline diagram) System KG construction: KG meta-model design (static part: manual category and action definition; dynamic part: automatic concept extraction and entity linking from bug reports and configuration files), then test scenario generation (step normalization, splitting and clustering; scenario extraction; soap opera test generation from a seed bug report, via relevant bug report finding, to test scenarios), realised in a proof-of-concept tool.
(Meta-model) Entities: Category, Concept, Action, Step, Scenario, Cluster, Preconditions, Expected Results, Actual Results. Relations: presentedIn, synonymOf, antonymOf, hasConcept, hasAction, nextStep, actionOn, satisfy, leadTo, execute, belongTo.
Constructing a System Knowledge Graph of User Tasks and Failures from Bug Reports to Support Soap Opera Testing (Su et al., ASE 2022)
40. KG uses: Dark patterns - ethical, security and privacy risks
• A dark pattern is a type of user interface designed to trick users into doing things that they did not mean to do
• Examples: disguised ads, preselection, hidden information, trick questions, forced action, false hierarchy, etc.
41. KG Uses: Dark Pattern Detector
Knowledge graph + natural language processing + computer vision
Input: a user interface
Output: locate the dark pattern, explain it and give examples
• Dark pattern: Privacy Zuckering
• Description: you are tricked into publicly sharing more information about yourself than you really intended to
• Possible solution: allow users to disable the permission
• Similar examples
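A toy rule-based flagger hints at how the text side of detection could work. The trigger phrases below are invented for illustration; a real detector, as the slide notes, combines a knowledge graph with NLP and computer vision rather than keyword lists.

```python
# Hypothetical phrase rules loosely associated with dark-pattern types.
RULES = {
    "forced action":  ["you must share", "required to continue"],
    "trick question": ["uncheck to opt in"],
    "preselection":   ["[x] subscribe", "[x] share my data"],
}

def flag_dark_patterns(ui_text):
    """Return dark-pattern types whose trigger phrases appear in the UI text."""
    text = ui_text.lower()
    return sorted(pattern for pattern, phrases in RULES.items()
                  if any(phrase in text for phrase in phrases))

found = flag_dark_patterns("[x] Subscribe to partner offers. Uncheck to opt in.")
# Flags both a preselected checkbox and a trick question in one screen.
```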
42. KG Uses: Supplementing an AIBOM Generator
• Many organizations procure AI technologies/solutions from third parties to build AI systems
• A Software Bill of Materials (SBOM) ensures transparency and security of the software supply chain
- Component name, version, supplier, dependency relationships, SBOM author, timestamp, etc.
• AI/Data BOMs extend this to AI components such as models and datasets
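An AI BOM entry might look like the JSON below. The field names simply mirror the generic SBOM elements listed above, extended with AI-specific dependency types; they are illustrative, not a formal standard such as SPDX or CycloneDX, and every name and version is invented.

```python
import json

# Illustrative AI bill-of-materials entry for a procured model.
aibom = {
    "component": "loan-approval-model",
    "version": "1.3.0",
    "supplier": "third-party-vendor",
    "bom_author": "platform-team",
    "timestamp": "2023-05-01T00:00:00Z",
    "dependencies": [
        # AI-specific parts: upstream model and training data...
        {"type": "model",   "name": "base-llm",        "version": "2.1"},
        {"type": "dataset", "name": "loans-2015-2022", "licence": "internal"},
        # ...alongside ordinary software dependencies.
        {"type": "library", "name": "scikit-learn",    "version": "1.2.2"},
    ],
}
serialized = json.dumps(aibom, sort_keys=True)  # machine-readable for exchange
```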
43. Summary: lift the boat, connect the risks
• Cybersecurity and AI risk need not remain a tale of two silos
• Solution principles
- Lift the boat: solutions that benefit multiple kinds of risk management
- Connected risks: meaningful technical trade-offs/mitigations
- Whole-of-system risks: meaningful aggregation
- Integration with existing processes
• Solutions
- Process/governance patterns for connected/integrated risk management
- Product/tech patterns for embedding multi-risk mitigations
- KG-based automated tools to assist humans
For more: https://research.csiro.au/scs/ | liming.zhu@data61.csiro.au