This details how to build an AI agent in Java using LangChain4j:
- What structured outputs are
- Using Function Calling
- Creating an MCP server
- Consuming an MCP server
Presented at the Seattle Java User Group on May 23rd, 2025
Building AI agents
in Java
Julien Dubois
@juliendubois
Principal Manager,
Java Developer Relations
AI acceleration
• 2 years ago, we learned how
to chat with an LLM
• 1 year ago, we learned RAG
• In June last year, Function
Calling appeared
• In August, Structured Outputs
• In November, MCP
• Last month, Studio Ghibli
pictures
Introduction to
LangChain4j
- Java version of LangChain, a
JavaScript + Python
framework
- Facilitates the creation of
applications using Large
Language Models
- Open Source
The new OpenAI Java SDK is here!
• https://github.com/openai/openai-java is the new official OpenAI
Java SDK
• Supported by LangChain4j, in the "openai-official" module
• Also works with other models and cloud providers, as it’s the de
facto standard
• My personal recommendation over the 2 other Azure SDKs (Azure
OpenAI and Azure AI Inference), and I coded the LangChain4j
support for all 3 ☺
• This is what we’ll use in the next slides and demos
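As a first taste, a minimal chat call through the openai-official module could look like the sketch below. The `OpenAiOfficialChatModel` builder options and the `gpt-4o-mini` model name are assumptions; an `OPENAI_API_KEY` environment variable is expected.

```java
import dev.langchain4j.model.openaiofficial.OpenAiOfficialChatModel;

public class FirstChatDemo {

    // Model name is an assumption; use whichever model you have access to
    static final String MODEL_NAME = "gpt-4o-mini";

    public static void main(String[] args) {
        // Builder based on the LangChain4j "openai-official" module;
        // check that module's documentation for the exact options
        var model = OpenAiOfficialChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName(MODEL_NAME)
                .build();

        String answer = model.chat("Say hello to the Seattle Java User Group!");
        System.out.println(answer);
    }
}
```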
What’s an agent?
A piece of software that can act autonomously
In our context, this means that the LLM is able to take decisions and
act:
- An event will trigger its execution
- It will call functions, depending on its needs
- It will act and execute tools
Structured Outputs
• You could already ask an LLM to answer in JSON, but you
couldn’t enforce the schema
• JSON Schema is a specification that describes the structure
of a JSON payload
• This guarantees you’ll be able to map the LLM’s response to a Java object
• At the cost of somewhat lower performance
• As it’s complicated to hand-craft a JSON schema, using a
framework like LangChain4j that creates it for you is highly
recommended
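With LangChain4j's AI Services, the interface's return type drives the generated JSON schema, as in the sketch below. The `Person` record, the prompt, and the `strictJsonSchema(true)` builder option are illustrative assumptions.

```java
import dev.langchain4j.model.openaiofficial.OpenAiOfficialChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;

public class StructuredOutputDemo {

    // LangChain4j derives a JSON schema from this record (hypothetical type)
    record Person(String name, int age) {}

    interface PersonExtractor {
        @UserMessage("Extract the person described in: {{it}}")
        Person extract(String text);
    }

    public static void main(String[] args) {
        var model = OpenAiOfficialChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .strictJsonSchema(true) // ask the model to strictly follow the schema
                .build();

        PersonExtractor extractor = AiServices.create(PersonExtractor.class, model);
        Person person = extractor.extract("John Doe, who is 42, lives in Seattle.");
        System.out.println(person); // the JSON response maps directly to a Java object
    }
}
```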
Function Calling (or Tool Calling)
Allows an LLM to execute local code, using a JSON payload
• Warning!
• Not all models have this capability
• The LLM will decide if/when to call a function, and will pass its
parameters
• This allows an LLM to gather external information (similar to the
RAG pattern)
• And most importantly, to execute actions!
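In LangChain4j, a local function is exposed with the `@Tool` annotation; the LLM decides if and when to call it and passes the parameters. The sketch below assumes the weather tool and its hard-coded answer are hypothetical, and the `chatModel` builder method name is taken from LangChain4j 1.x.

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.model.openaiofficial.OpenAiOfficialChatModel;
import dev.langchain4j.service.AiServices;

public class FunctionCallingDemo {

    static class WeatherTools {
        @Tool("Returns the current temperature in a city, in Celsius")
        int currentTemperature(String city) {
            // Hypothetical: a real implementation would call a weather API
            return 21;
        }
    }

    interface Assistant {
        String chat(String message);
    }

    public static void main(String[] args) {
        var model = OpenAiOfficialChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatModel(model)
                .tools(new WeatherTools()) // the LLM may decide to call currentTemperature
                .build();

        System.out.println(assistant.chat("Should I wear a coat in Seattle today?"));
    }
}
```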
MCP = Model Context Protocol
• Standardized protocol that allows LLMs to call remote functions
• Huge success since the specification was released by Anthropic
• Official SDK developed by the Spring AI team
• Allows you to easily create servers
• LangChain4j and Spring AI can both use it as clients
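A server can be sketched roughly as below using Spring AI's MCP support. The starter artifact (`spring-ai-starter-mcp-server-webmvc`), the `MethodToolCallbackProvider` class, and the `TimeService` tool are assumptions based on Spring AI 1.x; check the Spring AI documentation for the exact coordinates.

```java
// Assumes the spring-ai-starter-mcp-server-webmvc dependency (name assumed);
// the starter exposes the registered tools over the MCP protocol.
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class McpServerDemo {

    // Hypothetical tool exposed by this MCP server
    static class TimeService {
        @Tool(description = "Returns the current time on the server")
        String currentTime() {
            return java.time.ZonedDateTime.now().toString();
        }
    }

    @Bean
    ToolCallbackProvider tools() {
        // Registers the annotated methods as MCP tools
        return MethodToolCallbackProvider.builder()
                .toolObjects(new TimeService())
                .build();
    }

    public static void main(String[] args) {
        SpringApplication.run(McpServerDemo.class, args);
    }
}
```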
MCP on the client side
• MCP servers can be used by desktop clients
• Claude Desktop
• VS Code
• They can also be used by AI frameworks
• Supported by LangChain4j
• Very similar to using Function Calling
• Structured Outputs also work!
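On the client side, LangChain4j can wire an MCP server in as just another tool provider, very much like Function Calling. The transport class names and the local SSE URL below are assumptions; adapt them to your server.

```java
import java.util.List;

import dev.langchain4j.mcp.McpToolProvider;
import dev.langchain4j.mcp.client.DefaultMcpClient;
import dev.langchain4j.mcp.client.transport.http.HttpMcpTransport;
import dev.langchain4j.model.openaiofficial.OpenAiOfficialChatModel;
import dev.langchain4j.service.AiServices;

public class McpClientDemo {

    interface Assistant {
        String chat(String message);
    }

    public static void main(String[] args) {
        // Transport and URL are assumptions: point this at your MCP server's SSE endpoint
        var transport = new HttpMcpTransport.Builder()
                .sseUrl("http://localhost:8080/sse")
                .build();

        var mcpClient = new DefaultMcpClient.Builder()
                .transport(transport)
                .build();

        // The MCP server's tools become available to the LLM, like local @Tool methods
        var toolProvider = McpToolProvider.builder()
                .mcpClients(List.of(mcpClient))
                .build();

        var model = OpenAiOfficialChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatModel(model)
                .toolProvider(toolProvider)
                .build();

        System.out.println(assistant.chat("What time is it on the server?"));
    }
}
```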