Building AI agents
in Java
Julien Dubois
@juliendubois
Principal Manager,
Java Developer Relations
AI acceleration
• 2 years ago, we learned how
to chat with an LLM
• 1 year ago, we learned RAG
• In June last year, Function
Calling appeared
• In August, Structured Outputs
• In November, MCP
• Last month, Studio Ghibli
pictures
Introduction to
LangChain4j
- Java version of LangChain, a
JavaScript + Python
framework
- Facilitates the creation of
applications using Large
Language Models
- Open Source
The new OpenAI Java SDK is here!
• https://github.com/openai/openai-java is the new official OpenAI
Java SDK
• Supported by LangChain4j, in the "openai-official" module
• Also works with other models and cloud providers, as it’s the de
facto standard
• My personal recommendation over the 2 other Azure SDKs (Azure
OpenAI and Azure AI Inference), and I coded the LangChain4j
support for all 3 ☺
• This is what we’ll use in the next slides and demos
How to chat with an LLM
New: Passwordless authentication
Live demo
Chatting with OpenAI
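A minimal sketch of what the chat demo covers, assuming the langchain4j-open-ai-official module and its OpenAiOfficialChatModel builder (class, package, and builder method names may differ slightly between LangChain4j versions, and the model name is illustrative). The demo also shows passwordless authentication; an API key is used here to keep the example short.

```java
import dev.langchain4j.model.openaiofficial.OpenAiOfficialChatModel;

public class ChatDemo {

    public static void main(String[] args) {
        // Build a chat model backed by the official OpenAI Java SDK
        // (API key read from an environment variable; "gpt-4o-mini" is an assumed model name)
        var model = OpenAiOfficialChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        // Send a single prompt and print the answer
        String answer = model.chat("Tell me a joke about Java developers");
        System.out.println(answer);
    }
}
```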
What’s an agent?
A piece of software that can act autonomously
In our context, this means that the LLM is able to make decisions and
act:
- An event will trigger its execution
- It will call functions, depending on its needs
- It will act and execute tools
Structured Outputs
• You could already ask an LLM to answer you in JSON, but you
couldn’t enforce the schema
• JSON Schema is a specification for describing the structure
of a JSON payload
• This guarantees you’ll be able to map the LLM’s response to a Java object
• At the cost of slightly lower performance
• As it's complicated to hand-craft a JSON schema, using a
framework like LangChain4j, which creates it for you, is highly
recommended
Using Structured Outputs with LangChain4j
Live demo
Structured Outputs
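A minimal sketch of the Structured Outputs demo, assuming LangChain4j's AI Services: the interface method returns a Java record, LangChain4j derives the JSON schema from it, and the response is mapped back to the record. The Person record and PersonExtractor interface are illustrative names, not part of the demo repository.

```java
import dev.langchain4j.model.openaiofficial.OpenAiOfficialChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;

public class StructuredOutputDemo {

    // The LLM's answer is mapped to this record via a generated JSON Schema
    record Person(String firstName, String lastName, int age) {}

    // Illustrative AI Service: LangChain4j implements this interface at runtime
    interface PersonExtractor {
        @UserMessage("Extract the person described in this text: {{it}}")
        Person extractPerson(String text);
    }

    public static void main(String[] args) {
        // Model must support Structured Outputs; LangChain4j generates the schema
        // from the record return type and requests a schema-constrained response
        var model = OpenAiOfficialChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        PersonExtractor extractor = AiServices.create(PersonExtractor.class, model);
        Person person = extractor.extractPerson("Julien is a 40-year-old developer advocate.");
        System.out.println(person);
    }
}
```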
Function Calling (or Tools Calling)
Allows an LLM to execute local code, using a JSON payload
• Warning!
• Not all models have this capability
• The LLM will decide if/when to call a function, and will pass its
parameters
• This allows an LLM to gather external information (similar to the
RAG pattern)
• And most importantly, to execute actions!
Function Calling with LangChain4j
Live demo
Function Calling
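A minimal sketch of the Function Calling demo, assuming LangChain4j's @Tool annotation and AI Services. The weather tool is illustrative, and the builder method for passing the model may be chatLanguageModel(...) or chatModel(...) depending on the LangChain4j version.

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.model.openaiofficial.OpenAiOfficialChatModel;
import dev.langchain4j.service.AiServices;

public class FunctionCallingDemo {

    // Illustrative tool: the LLM decides if/when to call it and supplies the city parameter
    static class WeatherTools {
        @Tool("Returns the current weather for a given city")
        String getWeather(String city) {
            return "It is sunny and 25°C in " + city;   // hard-coded for the sketch
        }
    }

    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        var model = OpenAiOfficialChatModel.builder()   // built as in the chat example
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)   // chatModel(model) on recent versions
                .tools(new WeatherTools())
                .build();

        System.out.println(assistant.chat("Should I take an umbrella in Paris today?"));
    }
}
```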
MCP = Model Context Protocol
• Standardized protocol that allows LLMs to call remote functions
• Huge success since the specification was released by Anthropic
• Official SDK developed by the Spring AI team
• Makes it easy to create servers
• LangChain4j and Spring AI can both use it as clients
Configuring an MCP server
Live demo
MCP server
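A minimal sketch of an MCP server built with the Spring AI MCP server support mentioned above. The starter artifact (for example spring-ai-starter-mcp-server-webmvc) and the tool-callback wiring are assumptions based on the Spring AI documentation, and the greeting tool is illustrative.

```java
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Service;

// Assumes the Spring AI MCP server starter is on the classpath
@SpringBootApplication
public class McpServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(McpServerApplication.class, args);
    }

    // Illustrative tool exposed by the MCP server
    @Service
    static class GreetingService {
        @Tool(description = "Greets a person by name")
        public String greet(String name) {
            return "Hello, " + name + "!";
        }
    }

    // Registers the annotated methods as MCP tools
    @Bean
    public ToolCallbackProvider tools(GreetingService greetingService) {
        return MethodToolCallbackProvider.builder()
                .toolObjects(greetingService)
                .build();
    }
}
```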
MCP on the client side
• MCP servers can be used by desktop clients
• Claude Desktop
• VS Code
• They can also be used by AI frameworks
• Supported by LangChain4j
• Very similar to using Function Calling
• Structured Outputs also work!
Using an MCP server from LangChain4j
Live demo
LangChain4j + MCP client
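A minimal sketch of consuming an MCP server from LangChain4j, based on its documented MCP client support (the langchain4j-mcp module). The stdio command used to launch the Azure CLI MCP server is illustrative, and the builder method for the model may differ by version.

```java
import java.util.List;

import dev.langchain4j.mcp.McpToolProvider;
import dev.langchain4j.mcp.client.DefaultMcpClient;
import dev.langchain4j.mcp.client.McpClient;
import dev.langchain4j.mcp.client.transport.McpTransport;
import dev.langchain4j.mcp.client.transport.stdio.StdioMcpTransport;
import dev.langchain4j.model.openaiofficial.OpenAiOfficialChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.tool.ToolProvider;

public class McpClientDemo {

    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        // Launch the MCP server as a local process over stdio (command is illustrative)
        McpTransport transport = new StdioMcpTransport.Builder()
                .command(List.of("java", "-jar", "azure-cli-mcp.jar"))
                .build();

        McpClient mcpClient = new DefaultMcpClient.Builder()
                .transport(transport)
                .build();

        // Expose the MCP server's tools to the AI Service, just like local @Tool methods
        ToolProvider toolProvider = McpToolProvider.builder()
                .mcpClients(List.of(mcpClient))
                .build();

        var model = OpenAiOfficialChatModel.builder()   // built as in the chat example
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)   // chatModel(model) on recent versions
                .toolProvider(toolProvider)
                .build();

        System.out.println(assistant.chat("List my Azure resource groups"));
    }
}
```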
The future of MCP
• Filtering/selecting tools
• Orchestration
• Security and signatures of
servers
• "Damn Vulnerable MCP Server"
• https://github.com/harishsg993010/damn-vulnerable-MCP-server
• Cloud hosting
• Authentication
• Downloading servers through
MCP registries (à la Docker
Hub)
Resources
Complete demo:
https://github.com/jdubois/jdubois-langchain4j-demo
Azure CLI MCP server:
https://github.com/jdubois/azure-cli-mcp