2. Anti-Trust Policy Notice
● Linux Foundation meetings involve participation by industry competitors, and it is the
intention of the Linux Foundation to conduct all of its activities in accordance with
applicable antitrust and competition laws. It is therefore extremely important that
attendees adhere to meeting agendas, and be aware of, and not participate in, any
activities that are prohibited under applicable US state, federal or foreign antitrust and
competition laws.
● Examples of types of actions that are prohibited at Linux Foundation meetings and in
connection with Linux Foundation activities are described in the Linux Foundation
Antitrust Policy available at http://www.linuxfoundation.org/antitrust-policy. If you have
questions about these matters, please contact your company counsel, or if you are a
member of the Linux Foundation, feel free to contact Andrew Updegrove of the firm of
Gesmer Updegrove LLP, which provides legal counsel to the Linux Foundation.
3. Agenda 20th Feb 2024
OpenChain AI Study Group
1) Recap of discussion so far
   1) Scope – how to build trust in the open source AI supply chain
   2) What are the “compliance artifacts”?
   3) How do we know they can be trusted?
   4) Discuss use cases
      1) Inbound
      2) Deployment internally
      3) Hosting externally
      4) Distributing externally
   Recording: https://www.openchainproject.org/news/2024/02/07/openchain-ai-study-group-north-america-europe-2024-02-06-recording
2) Workshop
3) Continuing the conversation
4. New Era, but Same Solution
[Diagram: artifacts flow from upstream (inbound), through internal Training governed by Policy and Process, then outbound to downstream]
Goal: Compliance artifacts delivered by a consistent process, so that they can be relied upon throughout the supply chain (see the sketch below)
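As a discussion aid, here is a minimal sketch of what a compliance artifact record travelling with a component might look like; the field names are assumptions for illustration, not an agreed OpenChain or SPDX schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are assumptions for discussion,
# not an agreed OpenChain or SPDX schema.
@dataclass
class ComplianceArtifact:
    component: str                 # model or dataset being described
    supplier: str                  # upstream party that produced it
    license_id: str                # SPDX license identifier, e.g. "Apache-2.0"
    provenance: list[str] = field(default_factory=list)  # data sources / lineage
    process: str = "unknown"       # policy/process under which it was produced

# An inbound artifact can be checked once and then passed downstream
# unchanged, which is what makes a consistent process reliable
# across the supply chain.
inbound = ComplianceArtifact(
    component="example-llm-7b",
    supplier="Upstream Labs",
    license_id="Apache-2.0",
    provenance=["web-crawl-2023", "curated-corpus-v1"],
    process="OpenChain-conformant review",
)
```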
5. The AI Regulatory Framework
● EU AI Act
● China
○ Generative AI Measures
○ Deep Synthesis Provisions
○ Algorithm Recommendation Provisions
○ Security Assessment Provisions
○ CAC Testing Requirements
● US
○ Executive Order on AI
○ Blueprint for an AI Bill of Rights
● OECD Guidelines for Secure AI System Development
● […]
Compare: NIST AI Risk Management Framework
6. Use Cases for Data + Models
[Diagram: Data source 1 … Data source n feed a Dataset (described by a Data Card); Training, with back propagation, produces the AI Model (described by a Model Card); in the deployed environment the model serves Inference, where a User sends a Prompt and receives a Response. A pre-trained model may also be delivered directly.]
● The Data Card and Model Card are the AI-model equivalent of an attribution file and a readme
● Not the same Brownian Motion problem
● More interesting licenses
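To make the Data Card / Model Card idea concrete, here is a hedged sketch of how the two cards might be linked so that data provenance travels with the model; the fields are illustrative assumptions, not a standardized card format.

```python
from dataclasses import dataclass

# Illustrative only: these fields are assumptions, not a standard card schema.
@dataclass
class DataCard:
    name: str
    sources: list[str]       # e.g. ["Data source 1", ..., "Data source n"]
    licenses: list[str]      # licenses covering the underlying data
    known_limitations: str = ""

@dataclass
class ModelCard:
    name: str
    trained_on: DataCard     # links the model back to its data provenance
    license_id: str
    intended_use: str = ""

# A downstream recipient can trace the model back to its data sources.
card = ModelCard(
    name="example-llm-7b",
    trained_on=DataCard(
        name="corpus-v1",
        sources=["web-crawl-2023", "curated-corpus-v1"],
        licenses=["CC-BY-4.0"],
    ),
    license_id="Apache-2.0",
)
print(card.trained_on.sources)
```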
7. Questions for discussion
● The EU AI Act assigns risk profiles to categories of models
● NIST highlights that artifacts are high risk where transparency is lacking
● Discussion: What is a high-risk artifact? Should we define compliance based on risk type?
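One way to frame the discussion question: a toy sketch of risk-based compliance triage built on the EU AI Act's broad tiers (unacceptable, high, limited, minimal). The use-case mapping and the artifact lists below are hypothetical examples for discussion, not legal guidance.

```python
# Toy sketch: the EU AI Act groups systems into broad risk tiers.
# This use-case mapping is hypothetical, for discussion only.
RISK_TIERS = {
    "social-scoring": "unacceptable",
    "medical-diagnosis": "high",
    "customer-chatbot": "limited",
    "spam-filter": "minimal",
}

def required_artifacts(use_case: str) -> list[str]:
    """Artifacts a hypothetical risk-based compliance policy might demand."""
    tier = RISK_TIERS.get(use_case, "high")  # default conservatively
    if tier == "unacceptable":
        return ["do-not-deploy notice"]
    if tier == "high":
        return ["model card", "data card", "risk assessment", "license review"]
    if tier == "limited":
        return ["model card", "transparency notice"]
    return ["license review"]

print(required_artifacts("customer-chatbot"))
# ['model card', 'transparency notice']
```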
8. SPDX AI Profile
● The HOW
● https://docs.google.com/presentation/d/1tsE2pnd0VyyyPoAGqiJd0EPJUbulesld58YdpEhcE4Q/edit#slide=id.g2226e785220_0_409
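For orientation before opening the linked deck: a rough sketch of the kind of metadata the SPDX AI profile is designed to capture. The keys below are illustrative paraphrases only, not the normative SPDX 3.0 property names; see the linked material for those.

```python
# Illustrative paraphrase of SPDX-AI-profile-style metadata for one model.
# NOTE: keys are assumptions, not the normative SPDX 3.0 property names.
ai_package = {
    "name": "example-llm-7b",
    "type_of_model": "transformer language model",
    "information_about_training": "fine-tuned on curated corpus-v1",
    "limitations": "not evaluated for medical or legal use",
    "safety_risk_assessment": "low",
    "license": "Apache-2.0",
}

for key, value in ai_package.items():
    print(f"{key}: {value}")
```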
9. What is open? What is transparency?
● LLMs are moving towards more closed structures
● GPT-2 had open datasets; GPT-3 and GPT-4 are more closed
● What does transparency mean?
11. Agenda 6th Feb 2024
OpenChain AI Study Group
1) Recap of discussion so far
2) Scope – how to build trust in the open source AI supply chain
   1) What are the “compliance artifacts”?
   2) How do we know they can be trusted?
3) Discuss use cases
   1) Inbound
   2) Deployment internally
   3) Hosting externally
   4) Distributing externally
13. Goals for AI Study Group
● Establishing industry-wide agreement on AI management
● Developing AI principles for building trust in the supply chain
● Discussing AI ethics such as transparency and bias
DISCUSSION – do you agree? What are your thoughts?
From Jan 23, 2024 Meeting:
14. Achieving Our Goals
● Weekly meet-up to discuss progress
● Commitment to making progress
DISCUSSION: what else do we need to meet our goals?
From Jan 23, 2024 Meeting:
Editor's Notes
High-risk artifacts are digital assets containing sensitive data or code that, if breached or misused, can pose a significant risk to an organization's security. Examples of high-risk artifacts include cryptographic keys, log files, and source code.
Openness refers to the concept of making data, resources, and content available and accessible to everyone. Open-source software, open data, and open hardware are examples of this concept.
The AI Study Group aims to establish industry-wide agreement on managing AI and building trust in the supply chain. We will discuss AI ethics, such as transparency and bias, and how to ensure companies are following AI governance guidance.
To achieve our goals, we need to collaborate with the OpenChain AI Study Group. We must establish an industry-wide agreement on AI management and develop AI principles that build trust in the supply chain. It is also important to discuss AI ethics, such as transparency and bias. By working together, we can ensure that our companies are following the guidance we provide, and we can make AI more trustworthy and reliable.