LangChain and LangFlow
Gene Leybzon, Jim Steele
February 1, 2023
DISCLAIMER
§ The views and opinions expressed by the Presenter are those of the Presenter.
§ Presentation is not intended as legal or financial advice and may not be used as legal or
financial advice.
§ Every effort has been made to assure this information is up-to-date as of the date of
publication.
Agenda for today
1. LangChain Concept
2. Demo
3. LangFlow
4. Q&A
LangChain
Value Proposition:
LangChain is designed as a comprehensive toolkit for
developers working with large language models (LLMs).
It aims to facilitate the creation of applications that are
context-aware and capable of reasoning, thereby
enhancing the practical utility of LLMs in various
scenarios.
Purpose:
LangChain simplifies the transition from prototype to
production, offering a suite of tools for debugging, testing,
evaluation, and monitoring.
Parts of the LangChain Framework
https://www.langchain.com/
• Libraries: Available in Python and JavaScript, these libraries offer interfaces and integrations for various components, a runtime for creating chains and agents, and ready-made chain and agent implementations.
• Templates: A set of deployable reference architectures for diverse tasks, facilitating ease of deployment.
• LangServe: A specialized library for converting LangChain chains into a REST API, enhancing accessibility and integration (a minimal sketch follows this list).
• LangSmith: A comprehensive developer platform for debugging, testing, evaluating, and monitoring chains created with any LLM framework, fully compatible with LangChain.
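To make the LangServe item concrete, here is a minimal sketch of serving a chain as a REST API. It assumes langserve (with its server extras), fastapi, and uvicorn are installed; the chain, route path, and file name are illustrative, not taken from the slides.
from fastapi import FastAPI
from langserve import add_routes
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

app = FastAPI(title="LangChain REST API")
chain = ChatPromptTemplate.from_template("Summarize: {input}") | ChatOpenAI()
add_routes(app, chain, path="/summarize")  # exposes /summarize/invoke, /summarize/stream, etc.
# Run with: uvicorn server:app --port 8000 (if this file is saved as server.py)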
GenAI Application Development with LangChain
Develop
• Streamlined Prototyping: Simplifies the process of creating prototypes with large language models.
• Context-Aware Systems: Facilitates the building of applications that understand and utilize context effectively.
• Integration Support: Offers tools for integrating various data sources and components.
• Production Readiness: Provides resources for debugging, testing, evaluating, and monitoring applications.
• Collaborative Development: Encourages and supports collaborative efforts in the developer community.
• Diverse Applications: Suitable for a wide range of applications, from chatbots to document analysis.
Turn into product
• Scalability: Provides tools to scale applications from small prototypes to larger, production-level systems.
• Robust Testing: Offers robust testing frameworks to ensure application reliability.
• Monitoring Tools: Includes monitoring capabilities to track performance and user interactions.
• Deployment Ease: Simplifies the deployment process, making it easier to launch applications.
• Continuous Improvement: Supports ongoing development and refinement of applications post-launch.
Deploy
• LangServe: A library that allows for the deployment of LangChain chains as REST APIs, making applications easily accessible and integrable.
• Deployment Templates: Ready-to-use reference architectures that streamline the deployment process for various tasks.
• Scalability Tools: Supports the scaling of applications from development to production level.
• Ease of Integration: Ensures seamless integration with existing systems and workflows.
• Production-Grade Support: Offers features for ensuring stability and performance in production environments.
LangChain Components
LangChain components are designed to enhance the development and deployment of applications using large language models. Components are modular and easy to use, whether or not you use the rest of the LangChain framework, as the sketch below illustrates.
https://python.langchain.com/docs/integrations/components
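As a small illustration of using one component on its own, the sketch below runs a text splitter with no chains, agents, or LLM involved. It assumes the langchain-text-splitters package (installed alongside langchain) is available; the sample text is made up.
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
chunks = splitter.split_text("LangChain components can be used on their own. " * 20)
print(len(chunks), chunks[0])  # the splitter works even without the rest of the framework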
LangChain Libraries
langchain-core
• Base abstractions
• LangChain Expression Language
langchain-community
• Chat Models
• Email Tools
• Database integrations
langchain
• Chains
• Agents
• Data Retrieval Strategies
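A rough, hedged mapping of imports to the three packages above (the specific classes are only examples):
from langchain_core.prompts import ChatPromptTemplate      # langchain-core: base abstractions, LCEL
from langchain_core.runnables import RunnablePassthrough   # langchain-core: LCEL runtime pieces
from langchain_community.utilities import SerpAPIWrapper   # langchain-community: third-party integrations
from langchain.agents import AgentExecutor                 # langchain: chains, agents, retrieval strategies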
Base Abstractions
LangChain's base abstractions simplify integrating and using large language models (LLMs) in applications. They cover the main aspects of LLM integration, such as data processing, model interaction, and response generation, and provide the foundation for building more complex LLM-based solutions, streamlining development and deployment. For a detailed list and explanation of these base abstractions, see the LangChain documentation and GitHub repository.
1. Language models: This abstraction covers interaction with language models, including sending prompts to a model and receiving responses.
2. Chains: Chains are sequences of operations or transformations applied to data. In LangChain, chains process the input and output of language models, allowing for complex workflows.
3. Apps: This abstraction is about building applications that use language models. Apps combine different chains and models to create an end-to-end application.
4. Actuators: Actuators take action based on the output of a language model, such as sending an email, generating a report, or any other action that results from the model's output.
5. World Models: Abstractions for representing and understanding the state of the world. They can be used to maintain context or state across interactions with a language model (see the sketch after this list).
6. Orchestrators: Orchestrators manage and coordinate the interactions between the different components of the system, such as the language model, chains, actuators, and world models.
7. Components: Smaller building blocks used within chains. Components can be anything from a simple text-processing function to a complex neural network.
8. Data Sources: Abstractions for managing and accessing data that the language model or other components might need, such as databases, APIs, or file systems.
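To illustrate item 5 (World Models), here is a minimal sketch of keeping state across turns with LangChain's conversation memory. It is one way to realize the idea, not the only one, and assumes an OpenAI API key is configured.
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# The memory object carries earlier turns into each new prompt.
conversation = ConversationChain(llm=ChatOpenAI(), memory=ConversationBufferMemory())
conversation.predict(input="My name is Ada.")
print(conversation.predict(input="What is my name?"))  # answered from the stored context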
LANGCHAIN DEMO
INSTALLATION AND CONFIGURATION
Install LangChain and the OpenAI integration:
pip install langchain
pip install langchain-openai
Set up the API key for OpenAI (e.g., in /etc/launchd.conf on macOS, or in your shell profile):
sudo vi /etc/launchd.conf
export OPENAI_API_KEY="K...
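As a hedged alternative to the shell export, the key can also be passed directly when constructing the model (the key string below is only a placeholder):
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(api_key="sk-...")  # otherwise ChatOpenAI reads OPENAI_API_KEY from the environment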
HELLO OPENAI LANGCHAIN
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()
r = llm.invoke("how can langsmith help with testing?")
print(r)
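Note that llm.invoke returns an AIMessage; printing just the generated text looks like this (a small addition not on the original slide):
print(r.content)  # the text of the model's reply, without the message metadata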
PROMPT TEMPLATE
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
llm = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages([
("system", "You are world class technical documentation writer."),
("user", "{input}")
])
chain = prompt | llm
r = chain.invoke({"input": "how can langsmith help with testing?"})
print(r)
OUTPUT PARSER
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
llm = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages([
("system", "You are world class technical documentation writer."),
("user", "{input}")
])
output_parser = StrOutputParser()
chain = prompt | llm | output_parser
r = chain.invoke({"input": "how can langsmith help with testing?"})
print(r)
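The same chain can also stream tokens as they arrive; with StrOutputParser at the end, each chunk is a plain string. A small hedged addition:
for chunk in chain.stream({"input": "how can langsmith help with testing?"}):
    print(chunk, end="", flush=True)  # prints the answer incrementally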
“RANDOM” AGENT
import sys
import random
from langchain_openai import ChatOpenAI
from langchain.agents import tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.tools.convert_to_openai import format_tool_to_openai_function
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.agents import AgentExecutor
@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

@tool
def random_number() -> int:
    """Returns random number"""
    return random.randint(0, sys.maxsize)

tools = [get_word_length, random_number]
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are very powerful assistant, but don't know current events",
        ),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "How many letters in the word educator"})
agent_executor.invoke({"input": "Generate 10 random numbers"})
THE ROLE OF ART AND POETRY
# Import necessary modules and classes
import os
import json
import datetime
from langchain.agents import load_tools, initialize_agent, AgentType, ZeroShotAgent, Tool, AgentExecutor
from langchain_community.utilities import SerpAPIWrapper
from typing import List, Dict, Callable
from langchain.chains import ConversationChain
from langchain_openai import OpenAI, ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import AIMessage, HumanMessage, SystemMessage, BaseMessage
from jinja2 import Environment, FileSystemLoader
# Set environment variables for API keys and configurations
#os.environ['OPENAI_API_KEY'] = str("")
#os.environ["SERPAPI_API_KEY"] = str("")
os.environ["LANGCHAIN_TRACING_V2"]="false"
os.environ["LANGCHAIN_ENDPOINT"]="https://api.smith.langchain.com"
#os.environ["LANGCHAIN_API_KEY"]="" #https://smith.langchain.com/
os.environ["LANGCHAIN_PROJECT"]="pt-wooden-infix-62"
#Constants
#topic = "Ethics and the Good Life"
topic = "The Role of Art and Poetry"
word_limit = 300
names = {
"Plato": ["arxiv", "ddg-search", "wikipedia"],
"Aristotle": ["arxiv", "ddg-search", "wikipedia"],
}
max_dialogue_rounds = 8
#Language Model Initialization
llm = OpenAI(temperature=0, model_name='gpt-4-1106-preview')
...
System prompt template (shared by the Plato and Aristotle agents):
Your name is {name}.
Your description is as follows: {description}
Your goal is to persuade your conversation partner of your point of view.
DO look up information with your tool to refute your partner's claims.
DO cite your sources.
DO NOT fabricate fake citations.
DO NOT cite any source that you did not look up.
Do not add anything else.
Stop speaking the moment you finish speaking from your perspective.
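The rest of the script is elided above. A hedged sketch of how the dialogue loop might be wired together follows; it reuses llm, topic, names, and max_dialogue_rounds from the snippet, assumes the underlying tool packages (arxiv, duckduckgo-search, wikipedia) are installed, and the helper name make_agent and the shortened prefix are illustrative, not taken from the original code.
from langchain.agents import load_tools, initialize_agent, AgentType

def make_agent(name, tool_names):
    tools = load_tools(tool_names, llm=llm)
    prefix = (f"Your name is {name}. Persuade your conversation partner of your view on "
              f"'{topic}'. Look up information with your tools and cite only sources you used.")
    return initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                            agent_kwargs={"prefix": prefix}, verbose=True)

agents = {name: make_agent(name, tool_names) for name, tool_names in names.items()}
message = f"Opening statement on: {topic}"
for round_idx in range(max_dialogue_rounds):
    speaker = list(agents)[round_idx % 2]   # alternate between Plato and Aristotle
    message = agents[speaker].run(message)
    print(f"{speaker}: {message}\n")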
LANGFLOW
LANGFLOW – GUI FOR LANGCHAIN
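The LangFlow slides were live screenshots of the visual flow editor. For reference, LangFlow is typically installed and started as follows (a hedged note; commands vary by release):
pip install langflow
langflow run   # older releases: python -m langflow; the editor then opens in the browser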
QUESTIONS?
About Presenter
https://www.meetup.com/members/9074420/
https://www.linkedin.com/in/leybzon/