Series: Using AI / ChatGPT at Work - GPT Automation
Are you a small business owner or web developer interested in leveraging the power of GPT (Generative Pretrained Transformer) technology to enhance your business processes? If so, join us for a series of events focused on using GPT in business. You'll learn how to leverage GPT to improve your workflow and provide better services to your customers.
GPT Automation: What it is and How it Works
How Time-Saving GPT Automation Can Improve Your Business
Cost-Effective GPT Automation: How it Can Save Your Business Money
Using GPT Automation for Customer Service: Benefits and Best Practices
The Power of GPT Automation for Content Creation
Data Analysis Made Easy with GPT Automation
Top GPT-3 Automation Tools for Businesses
The Ethical Considerations of GPT Automation
Overcoming Bias in GPT Automation: Best Practices
The Future of GPT Automation: Trends and Predictions
Since we focus on "no code" here, we'll explore the tools that are already out there such as ChatGPT plugins for Chrome, OpenAI GPT API, low-code/no-code platforms like Make/Integromat and Zapier, existing apps like Jasper/Rytr, and ecosystem tools like Everyprompt. We'll also discuss the resources available for those interested in learning more about GPT, including other people’s prompts.
NoCode, Data & AI LLM Inside Bootcamp: Episode 6 - Design Patterns: Retrieval Augmentation with LLMs
1. NoCode, Data & AI LLM Inside Bootcamp
Design Patterns: Retrieval Augmentation with LLMs
The Next Frontier in AI: Retrieval Augmentation
Rahul Xavier Singh, Anant Corporation
2. Retrieval-Augmented (or Data-Augmented) LLM responses allow you to get accurate answers from your own data instead of hallucinated answers.
4. NoCode, Data & AI LLM Inside Bootcamp with Cassandra
A full-day bootcamp to familiarize product managers, software professionals, and data engineers with creating next-generation experts, assistants, and platforms powered by Generative AI with Large Language Models (LLMs, OpenAI, GPT).
kono.io/bootcamp
5. Agenda
● I: Strategy & Theory
● II: LLM Design Patterns
● III: NoCode/Code LLM Stacks
● IV: Build a Custom Chatbot with LLMs and Your Data
8. What is data / retrieval augmentation?
● Information Retrieval: For data an LLM has not been trained on, we can get the data from another source.
● Contextual Relevance: If the data we retrieve is relevant to the user’s query, we send only what we need, to manage context length limits.
● Enhanced Capability: This lets us pull data from different sources & systems to meet the needs of the user.
● Dynamic Learning: Unlike training or fine-tuning, which takes hours or days, we can have new data available immediately.
9. Prompting Techniques: ReAct
1. Reasoning / Acting (ReAct) continues to build on CoT (chain-of-thought) reasoning, but enhances it by acting, i.e. fetching other information that can help.
2. It can act by asking itself more questions, or by going out to outside systems. This is the basis of frameworks like LangChain and LlamaIndex.
ReAct: Synergizing Reasoning and Acting in Language Models
https://react-lm.github.io/
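A ReAct loop can be sketched as a parse-act-observe cycle. Everything here is a stand-in: `fake_llm` is a scripted stub in place of a real model, and `lookup` is a toy tool in place of a real outside system.

```python
# Toy ReAct loop: the "model" emits Thought/Action lines, the
# controller runs the action, feeds back an Observation, and repeats
# until the model emits a Final Answer.

KB = {"capital of france": "Paris"}  # toy external knowledge source

def lookup(query: str) -> str:
    """Toy tool the agent can act with."""
    return KB.get(query.lower(), "not found")

def fake_llm(transcript: str) -> str:
    # A real LLM would generate this; here we script two turns.
    if "Observation:" not in transcript:
        return "Thought: I should look this up.\nAction: lookup[capital of france]"
    return "Final Answer: Paris"

def react(question: str, max_steps: int = 3) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        output = fake_llm(transcript)
        if output.startswith("Final Answer:"):
            return output.removeprefix("Final Answer:").strip()
        # Parse the Action line, run the tool, append the Observation.
        action_line = [l for l in output.splitlines() if l.startswith("Action:")][0]
        tool_arg = action_line[action_line.index("[") + 1 : action_line.rindex("]")]
        transcript += f"\n{output}\nObservation: {lookup(tool_arg)}"
    return "gave up"

print(react("What is the capital of France?"))  # → Paris
```

Frameworks like LangChain automate exactly this parse-act-observe plumbing, which is why the deck recommends moving to one quickly.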
10. Using information retrieval with LLMs
● Your code intercepts the user’s query
● Talks to an information retrieval system:
○ Search index
○ SQL database
○ API …
● Constructs a prompt from the query and the “context” it got from the IR system
● Sends that newly constructed prompt to the LLM
● Gets the answer, formats it, and sends it back to the user
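The steps above can be sketched end to end. The `SEARCH_INDEX` dict and `fake_llm` below are illustrative stand-ins for a real IR system (search index, SQL database, or API) and a real model call.

```python
# Sketch of the IR-augmented flow: intercept query -> retrieve
# context -> construct prompt -> call LLM -> format and return.

SEARCH_INDEX = {  # stand-in for a search index / SQL database / API
    "refund policy": "Refunds are issued within 30 days of purchase.",
}

def retrieve(query: str) -> str:
    """Talk to the 'IR system': return the first matching document."""
    for key, doc in SEARCH_INDEX.items():
        if key in query.lower():
            return doc
    return ""

def build_prompt(query: str, context: str) -> str:
    """Construct a prompt from the query and the retrieved context."""
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )

def fake_llm(prompt: str) -> str:
    # Stand-in for an LLM call; here it just echoes the context line.
    return prompt.split("Context: ")[1].split("\n")[0]

def answer(query: str) -> str:
    context = retrieve(query)              # talk to the IR system
    prompt = build_prompt(query, context)  # construct the prompt
    return f"Answer: {fake_llm(prompt)}"   # format and send back

print(answer("What is your refund policy?"))
```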
12. Vectorized data augmentation
● You preprocess embeddings of your data into a vector database with an LLM
● Your code intercepts the user’s query and embeds it with an LLM
● Finds similar documents in the vector database
● Constructs a prompt from the query and the “context” it got from the vector database
● Sends that newly constructed prompt to the LLM
● Gets the answer, formats it, and sends it back to the user
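The vectorized flow above can be sketched with a toy bag-of-words "embedding" in place of a real LLM embedding model, and a plain list in place of a vector database. `VOCAB`, `embed`, and the cosine-similarity search are all illustrative assumptions.

```python
import math

VOCAB = ["cassandra", "database", "dog", "scales"]

def embed(text: str) -> list[float]:
    """Toy embedding: word counts over a tiny fixed vocabulary.
    A real system would call an LLM embedding endpoint instead."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# "Vector database": precomputed (vector, document) pairs.
DOCS = ["cassandra is a database", "my dog sleeps all day"]
VECTOR_DB = [(embed(d), d) for d in DOCS]

def most_similar(query: str) -> str:
    """Embed the intercepted query, find the closest stored document."""
    qv = embed(query)
    return max(VECTOR_DB, key=lambda pair: cosine(qv, pair[0]))[1]

context = most_similar("how does the cassandra database work")
print(f"Context: {context}\nQuestion: how does the cassandra database work")
```

From here the flow is identical to the IR case: the retrieved document becomes the "context" in the prompt sent to the LLM.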
13. Vector Information Retrieval Augmentation - Part 0
● Vector databases seem like the best “memory” for machine learning.
https://blog.christianperone.com/2013/09/machine-learning-cosine-similarity-for-vector-space-models-part-iii/
https://milvus.io/blog/scalable-and-blazing-fast-similarity-search-with-milvus-vector-database.md
17. Before LLM Engineering, Machine Learning Was Hard
https://planetcassandra.org/post/building-an-infinitely-smart-ai-powered-by-the-worlds-largest-scalable-database-apache-cassandra-part-1/
18. Now Making Intelligent Platforms Is a Lot Easier
https://planetcassandra.org/post/building-an-infinitely-smart-ai-powered-by-the-worlds-largest-scalable-database-apache-cassandra-part-1/
20. Key Takeaways: Retrieval Augmentation
- The core of LLM frameworks is retrieval augmentation.
- The first pillar of retrieval augmentation is prompt engineering: knowing how to ask the question.
- The second pillar of retrieval augmentation is data engineering: how to prepare the data.
- The last pillar of retrieval augmentation is basic software engineering.
- Try it out on your own first, but quickly move to an open framework.
21. Thank you and Dream Big.
Hire us
- Design Workshops
- Innovation Sprints
- Service Catalog
Anant.us
- Read our Playbook
- Join our Mailing List
- Read up on Data Platforms
- Watch our Videos
- Download Examples