ENTERPRISE KNOWLEDGE
Tracing the Thread: Decoding the
Decision-Making Process with
GraphRAG
A Case Study
Urmi Majumder and Kaleb Schultz
Enterprise Search & Discovery 2025
Topics Covered
⬢ What is Retrieval Augmented Generation (RAG)?
⬢ How does GraphRAG differ?
⬢ How can you use GraphRAG for Transparent and Explainable AI?
⬢ How can an organization use GraphRAG for Cross-Document,
Multi-Step Question and Answering?
⬢ 15+ years of experience in enterprise system architecture,
design, implementation, and operations
⬢ Leads the development of technical solutions in support of a
wide variety of knowledge and data management solutions
⬢ Principal architect in knowledge graphs, enterprise AI, and
scalable data management systems
⬢ Ph.D. in Computer Science, Duke University
Urmi Majumder
Principal Data Architecture Consultant
Kaleb Schultz
Senior Technical Analyst
⬢ 5+ years of experience in data science, automation, and process
improvement.
⬢ Experience leading and supporting graph and data engineering
efforts across multiple projects.
⬢ BA, Political Science, James Madison University
10 AREAS OF EXPERTISE
KM STRATEGY & DESIGN TAXONOMY & ONTOLOGY DESIGN
TECHNOLOGY SOLUTIONS AGILE, DESIGN THINKING, & FACILITATION
CONTENT & BRAND STRATEGY KNOWLEDGE GRAPHS, DATA MODELING, & AI
ENTERPRISE SEARCH INTEGRATED CHANGE MANAGEMENT
ENTERPRISE LEARNING CONTENT MANAGEMENT
80+ EXPERT
CONSULTANTS
HEADQUARTERED IN WASHINGTON, DC,
USA
ESTABLISHED 2013 – OUR FOUNDERS AND PRINCIPALS HAVE BEEN PROVIDING
KNOWLEDGE MANAGEMENT CONSULTING TO GLOBAL CLIENTS FOR OVER 20 YEARS.
KMWORLD’S
100 COMPANIES THAT MATTER IN KM (2015, 2016, 2017, 2018,
2019, 2020, 2021, 2022)
TOP 50 TRAILBLAZERS IN AI (2020, 2021, 2022)
CIO REVIEW’S
20 MOST PROMISING KM SOLUTION PROVIDERS (2016)
INC MAGAZINE
#2,343 OF THE 5000 FASTEST GROWING COMPANIES (2021)
#2,574 OF THE 5000 FASTEST GROWING COMPANIES (2020)
#2,411 OF THE 5000 FASTEST GROWING COMPANIES (2019)
#1,289 OF THE 5000 FASTEST GROWING COMPANIES (2018)
INC MAGAZINE
BEST WORKPLACES (2018, 2019, 2021, 2022)
WASHINGTONIAN MAGAZINE’S
TOP 50 GREAT PLACES TO WORK (2017)
WASHINGTON BUSINESS JOURNAL’S
BEST PLACES TO WORK (2017, 2018, 2019, 2020)
ARLINGTON ECONOMIC DEVELOPMENT’S
FAST FOUR AWARD – FASTEST GROWING COMPANY (2016)
VIRGINIA CHAMBER OF COMMERCE’S
FANTASTIC 50 AWARD – FASTEST GROWING COMPANY (2019,
2020)
AWARD-WINNING
CONSULTANCY
PRESENCE IN BRUSSELS, BELGIUM
EK At A Glance
STABLE CLIENT
BASE
Retrieval Augmented
Generation (RAG)
RAGs and
LLMs
Using RAG, an LLM can access information beyond its original training set.
While this can produce more accurate answers, foundation models alone cannot
reliably interpret enterprise data because they lack domain context.
Content
Sources and
RAGs
RAGs can pull information from multiple sources (databases, search engines, APIs, etc.).
RAGs can summarize multiple pieces of content into a single source of truth.
Relationships between content can be captured in a knowledge graph and
leveraged by a RAG for relevance scoring.
Retrieval Augmented Generation
Retrieval Augmented
Generation (RAG) is a process
that augments the input
supplied to an LLM with
relevant information from an
organization’s knowledge
domain.
Example RAG Call:

LLM without RAG
User Input: “Who is the CEO of Enterprise Knowledge?”
LLM Output: “The CEO of Enterprise Knowledge is Zach Wahl.” Right answer based on
public information (e.g., training data such as “Enterprise Knowledge’s CEO, Zach
Wahl, will be speaking…”).
Without retrieval, the LLM never sees domain context and must guess from
pre-training only.

LLM with RAG
User Input: “How many pharmaceutical clients does EK have?”
Knowledge Base: Organizational knowledge is brought in from a data source.
RAG Model: Uses embeddings (vector representations of text) to identify the
documents in the knowledge base most contextually similar to the prompt, passing
them to the LLM.
LLM: Refers to the data it was passed from the knowledge base. If the data is
up-to-date, precise, and interpreted correctly, the answer should be right.
LLM Output: The LLM produces an output that should be right.
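The retrieval step described above can be sketched in a few lines. The passages and vectors below are toy values invented for illustration; in a real system the embeddings come from an embedding model, not hand-written numbers.

```python
import math

# Toy "embeddings": hand-made 3-dimensional vectors standing in for
# real embedding-model output. Passages are invented for illustration.
knowledge_base = {
    "EK has 12 pharmaceutical clients across three regions.": [0.9, 0.1, 0.2],
    "Enterprise Knowledge's CEO, Zach Wahl, will be speaking...": [0.1, 0.8, 0.3],
    "Quarterly investment memo for Project ABC.": [0.2, 0.2, 0.9],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the k passages most similar to the query embedding."""
    ranked = sorted(
        knowledge_base.items(),
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

# Toy embedding of "How many pharmaceutical clients does EK have?"
query = [0.95, 0.05, 0.15]
context = retrieve(query)
print(context[0])  # this passage is what gets passed to the LLM
```

The retrieved passage is appended to the prompt, which is how the LLM ends up "referring to" organizational knowledge it was never trained on.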
Problem Statement
At this philanthropic organization, program officers and
leadership rely on a variety of documentation, ranging from
investment reports to memos, across multiple repositories and
formats to make impactful business decisions.
They need to have a way to:
▪ Identify information related to specific investments, their
outcomes, and changes over time across documents and
resources
So that they can…
Quickly assess current investment progress and status
Avoid manual “scavenger hunts” for supporting files
Ensure decisions are based on the most current information
How Common Tools Solve the Problem
Many traditional RAG-style tools would address
this by retrieving chunks of text related to the
user’s question based on surface-level matching,
including:
⬢ Using vector similarity to pull passages that
sound similar to the query.
⬢ Identifying sentences that mention key
investment terms (project or grant names,
country, partner, etc.).
These “naive-RAG” solutions don’t explain why certain passages were returned
and lack a deeper semantic understanding of how information connects across
files. Because of this, traditional RAG cannot trace its decision path or reconcile
information about the same investment that is scattered across multiple
documents, resulting in untraceable and incorrect answers.
The Solution: GraphRAG-based Chatbot

ORIGINAL STATE
● Investment information lives across reports, memos, spreadsheets, emails,
and disparate repos with inconsistent terminology.
● Manual reviews to answer questions (progress, status, changes) are slow
and error-prone.
● No single current view of the investment lifecycle, and decisions often rely
on the most visible documents.

THE NEED
● Natural-language access to find relevant passages across many files quickly.
● Treat information as an interconnected network of relationships, not
isolated text fragments.
● Provide traceable, auditable reasoning paths showing how the answer was
assembled.
● Reduce time to insight for program officers and leadership.

SOLUTION
● Construct a standards-based graph / ontology representing relevant
concepts and relationships.
● Link structured and unstructured data to unify terminology via the graph.
● Use the graph to drive retrieval: semantically anchored prompt assembly
into LLM-generated answers.
● Provide answers with provenance.
ENTERPRISE KNOWLEDGE
Transparent & Explainable AI
with GraphRAG
What is GraphRAG?
GraphRAG extends traditional RAG by grounding retrieval and reasoning in a semantic
knowledge graph, allowing the model to understand how information is connected, not
just textually similar.

GraphRAG
● Retrieves relationships, entities, and facts anchored to a shared ontology.
● Understands how chunks relate across documents.
● Produces traceable reasoning paths with provenance.
● Built for multi-step, cross-document questions.
● Transparent and auditable answer assembly tied to defined graph structure.

Traditional RAG
● Retrieves information based on vector similarity.
● Treats each chunk independently.
● Hard to explain why a passage was returned.
● Good for quick semantic search.
● Black-box behavior when answers are assembled in the prompt.
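The contrast between the two approaches can be sketched with a toy example: instead of ranking isolated chunks by similarity, a graph-grounded retriever follows explicit relationships out from an entity. The triples and the `neighbors` helper below are hypothetical illustrations, not part of any specific GraphRAG implementation.

```python
# A minimal knowledge graph as (subject, predicate, object) triples.
# Entities and relationships are invented for illustration.
triples = [
    ("Investment XYZ", "partOf", "Project ABC"),
    ("Investment XYZ", "hasGoal", "Improve water access"),
    ("Investment XYZ", "reportedIn", "Q3 Memo"),
    ("Q3 Memo", "hasSnippet", "Scope expanded to two new regions."),
]

def neighbors(entity):
    """Return every fact directly connected to an entity."""
    return [(s, p, o) for s, p, o in triples if s == entity or o == entity]

# Retrieval is a traversal: every returned fact is an explicit, typed edge
# that can be shown to the user, not an opaque similarity score.
facts = neighbors("Investment XYZ")
for s, p, o in facts:
    print(f"{s} --{p}--> {o}")
```

Because each fact arrives as a named edge, "why was this passage returned?" has a direct answer: the path that reached it.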
When to use a GraphRAG Solution?

Precision: Delivers highly accurate results by leveraging structured
relationships in graph databases to resolve ambiguities.
Intuition: Enhances decision-making by identifying patterns and relationships
that mimic human reasoning and tool use.
Adaptability: Dynamically adjusts to new information by integrating updates
via the graph and reconfiguring minimal components.
Composition: Synthesizes insights by combining data from multiple sources
into a unified, context-aware response.
Explanation: Provides transparent, interpretable reasoning by grounding
responses in explicit semantic links and definitions.
Parallel Retrieval: Optimizes efficiency through distributed query handling
across agents working on interrelated tasks.
What makes up a GraphRAG solution?
GraphRAG solutions are composed of:

Ontology: Defines the enterprise entities and relationships that the system
uses to contextualize retrieved knowledge.

Knowledge Graph Database: A data store that consists of nodes and their
relationships, corresponding to real-world entities and interactions.

Retrieval and Ranking Layer: Finds the relevant graph nodes, paths, and
documents and ranks them before passing context to the model.

LLM & Orchestration Layer: Combines retrieved graph context with the user
query and produces a grounded answer that can be traced back to the source.

[Diagram: example graph centered on a Person node linked to Name, Address,
Phone, Email, Claim ID, SSN, Relative, and Bank.]
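As a rough sketch of how the first two components fit together, the ontology can act as a schema that the knowledge graph database validates writes against. The class and property names below are invented for illustration, loosely echoing the Person example in the diagram.

```python
# Hypothetical mini-ontology: which relationship types are allowed
# between which entity classes. Names are illustrative only.
ontology = {
    ("Person", "hasAddress", "Address"),
    ("Person", "hasPhone", "Phone"),
    ("Person", "relatedTo", "Person"),
    ("Person", "hasAccountAt", "Bank"),
}

graph = []  # the knowledge graph: (subject, class, predicate, object, class)

def add_fact(subj, subj_cls, pred, obj, obj_cls):
    """Insert a fact only if the ontology permits this relationship."""
    if (subj_cls, pred, obj_cls) not in ontology:
        raise ValueError(f"Ontology forbids {subj_cls} -{pred}-> {obj_cls}")
    graph.append((subj, subj_cls, pred, obj, obj_cls))

add_fact("Alice", "Person", "hasAccountAt", "First National", "Bank")
# add_fact("Alice", "Person", "hasAddress", "First National", "Bank")
# would raise: a Bank cannot be the object of hasAddress.
```

This is why an ontology-backed graph "removes ambiguity": malformed or mistyped relationships are rejected at ingestion rather than surfacing later as wrong answers.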
Explainability via Semantics & Provenance

● Ontology provides structured meaning, which delivers precision and
removes ambiguity when responding to enterprise questions.
● Graph database enables traceable traversals, so responses are built from
specific nodes and hops rather than embedding guesses.
● Provenance enriches each retrieval result, so context contains source,
version, timestamp, and text snippet for auditability.
● Ranking ties reasoning to the most relevant relationships, making
composition possible across many distributed sources.
● The orchestration layer produces an answer that can be traced, so every
claim is tied back to specific hops and sources for explainability.
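A minimal sketch of what a provenance-enriched retrieval result might carry, assuming invented field names; real systems vary in exactly which metadata they attach to each snippet.

```python
from dataclasses import dataclass, field

@dataclass
class RetrievalResult:
    """One piece of context handed to the LLM, with its audit trail."""
    text: str                       # the snippet itself
    source: str                     # originating document or system
    version: str                    # which revision of the document
    timestamp: str                  # when the fact was recorded
    hops: list = field(default_factory=list)  # graph path that led here

result = RetrievalResult(
    text="Scope expanded to two new regions.",
    source="Q3 Memo",
    version="v2",
    timestamp="2024-09-30",
    hops=["Investment XYZ --reportedIn--> Q3 Memo"],
)

# Every claim in the final answer can cite this metadata verbatim.
print(f"{result.text} (source: {result.source} {result.version}, {result.timestamp})")
```

Carrying the traversal path (`hops`) alongside the snippet is what lets the orchestration layer tie each claim back to specific nodes rather than to an anonymous chunk.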
High Level Data Flow for a GraphRAG Solution

1. Structured (data) and unstructured (document) information.
2. Pipelines to extract structured data and enriched document metadata into
the graph.
3. Graph-backed knowledge aggregation layer, making data more accessible and
meaningful.
4. Secure API endpoints for Large Language Models and associated resources.
5. Application to interpret user queries, surface relevant context, and
ground LLM output.
6. Possible interfaces include a novel frontend app or APIs utilized by
internal agents.

Static components (the ontology and graph) are maintained consistently;
dynamic components (GraphRAG retrieval) are triggered at query time.
Case Study: Cross-Document,
Multi-Step Q&A with GraphRAG
The Challenge

A global philanthropic organization
needs to evaluate the progress and
performance of investments.
However, information about those
investments lives in disparate and
segmented systems (CMSs,
communication applications,
unstructured documents, etc.).
This makes it difficult to answer even
basic questions about status, outcomes,
risks, and alignment, because there is no
unified view or consistent terminology
across sources.
GraphRAG for Multi-Step Q&A

1. User submits a natural language query.
2. Ontology atomizes the query into graph-aligned hops.
3. Each traversal retrieves its scoped graph context and linked evidence.
4. LLM processes with enriched context from the graph.
5. System delivers a grounded final answer.
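The flow above can be sketched end-to-end with toy data. Every name here (the triples, `run_hop`, the hop decomposition) is a hypothetical illustration of splitting a question into graph-aligned hops, not a specific product's API.

```python
# Toy graph of (subject, predicate, object) facts, invented for illustration.
triples = [
    ("Project ABC", "hasInvestment", "Investment XYZ"),
    ("Investment XYZ", "hasGoal", "Improve water access"),
]

def run_hop(entity, predicate):
    """One graph-aligned hop: follow a single relationship type."""
    return [o for s, p, o in triples if s == entity and p == predicate]

# "What are the goals of the investments in Project ABC?" atomized
# into two hops, each independently traceable:
investments = run_hop("Project ABC", "hasInvestment")                # hop 1
goals = [g for inv in investments for g in run_hop(inv, "hasGoal")]  # hop 2

print(goals)  # → ['Improve water access']
```

Because each hop is scoped and recorded, the system can hand the LLM not just the answer's raw facts but the path that produced them, which is what makes the final answer grounded and auditable.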
GraphRAG Architecture

[Diagram: Graph construction side — tabular data, unstructured data, and
EK-generated gold standard data flow through a custom graph construction
pipeline (ontology-leveraged NER, graph-based semantic resolution) into a
GraphDB, with the ontology supplying semantic context. Graph use for AI
side (O&E framework) — the user query is processed using the ontology,
graph, and documents, with in-memory document retrieval; SME validation
of AI output feeds the end-user application.]
Business Transformation

THE BEFORE
● Disparate sources with structured, unstructured, and multiple versions
made answers inconsistent.
● LLMs lacked domain context and guessed from pre-training.
● No clear provenance to show where facts came from.
● Slow to compile project-level insights.
● Answers varied from person to person and were hard to verify.

THE AFTER
● Unified retrieval instead of manual hunting.
● Domain-aware interpretation of queries.
● Source-traceable evidence for every claim.
● Near-instant synthesis of project-level insights.
● Consistent answers based on shared meaning.

UNLOCKED QUESTIONS
● What investments are part of Project ABC?
● Is there a variance of more than 10% for Investment XYZ?
● What are the goals of Investment XYZ?
● How has the scope of Investment XYZ changed?
● Is Investment XYZ meeting its goals?
Closing Activity Reflections:
● Are you solving any GraphRAG
problems in your organization?
Where are you in your GraphRAG
journey?
● Will you be able to leverage some
of the approaches we discussed to
solve your problem?
● What are some blockers/concerns
that are top of mind for you?
