This document presents specific use cases for Agentic AI (Artificial Intelligence), featuring Large Language Models (LLMs), Generative AI, and snippets of Python code alongside each use case.
Agentic AI Use Cases with LLM models
Contents
Agentic AI Use Cases with LLM models
Introduction
    Virtual Machine
    Software used
    Getting started
Creating a Basic AI Agent
    Overview
    Code Snippet with Python modules
Building a Personal AI Assistant
    Overview
    Code Snippet with Python modules
Creating an AI-Powered Web Scraper Agent
    Overview
    Code Snippet with Python modules
Building AI-Powered Document Reader along with a Q&A Bot
    Overview
    Code Snippet with Python modules
Conclusion
References
Introduction
Agentic AI is a class of artificial intelligence that focuses on autonomous systems that can make
decisions and perform tasks without human intervention. The field is closely linked to agentic
automation, also known as agent-based process management systems, when applied to process
automation. Applications include software development, customer support, cybersecurity and
business intelligence, among others.
Virtual Machine
The demos in this article were performed on a virtual machine instance with the following specifications:
• OS: Ubuntu 22
• Hard Disk: 200 GB, thin provisioned
• RAM: 12 GB
• CPUs: 8 vCPUs
• Network: vNIC
Software used
• Ollama – Get up and running with LLMs (Large Language Models)
• Python computer language and required libraries
• VS Code – Visual Studio Code is the chosen IDE (Integrated Development Environment)
• Browser-based usage – Firefox on the VM by default, and Google Chrome for remote access
To keep things simple, this article presents four use cases, with the following sections devoted to each of them.
• Overview
• Code Snippet with Python modules
• Figures including screenshots
Getting started
The first step is to install Ollama on the VM. Ollama is available for macOS, Linux, and
Windows. The commands below summarize the Ollama setup on Linux.
linux@linux-vm:~$ ollama -v
ollama version is 0.6.5
linux@linux-vm:~$ ollama pull mistral
linux@linux-vm:~$ ollama list
NAME              ID              SIZE      MODIFIED
mistral:latest    f974a74358d6    4.1 GB    2 minutes ago
The next step is to ensure Python is available and to install the required libraries.
linux@linux-vm:~$ python3 --version
Python 3.10.12
linux@linux-vm:~$ sudo apt install python3-pip
linux@linux-vm:~$ pip install speechrecognition pyttsx3 langchain_ollama
The Python libraries required by each use case are listed in the corresponding sections that
follow.
Creating a Basic AI Agent
Overview
At the very basic level, a large language model (LLM) is a type of machine learning model designed
for natural language processing tasks such as language generation. LLMs are language models with
many parameters, and are trained with self-supervised learning on a vast amount of text.
Mistral and Llama 3, mentioned herein, are both open-source large language models (LLMs), but they
differ in their focus and strengths. Mistral prioritizes efficiency and speed, making it well-suited for
real-time applications and resource-constrained environments. Llama 3, on the other hand, excels in
large-scale tasks, complex reasoning, and high-quality content generation, targeting enterprise-level
applications and research.
This application allows users to enter search queries and get answers generated from the LLM.
Code Snippet with Python modules
### Basic AI Agent with WEB UI
import streamlit as st
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.prompts import PromptTemplate
from langchain_ollama import OllamaLLM
# Load AI Model
llm = OllamaLLM(model="mistral") # Change to "llama3" or another Ollama model
…
…
Figure: A Basic AI Agent
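The elided portion wires the model into a Streamlit chat loop using a prompt template and a message history. The flow can be sketched framework-free as below; note that `fake_llm` is a stand-in assumption for the `OllamaLLM` call, and `ChatHistory` is a simplified analogue of LangChain's `ChatMessageHistory`, so the loop can run without an Ollama server.

```python
# Sketch of the basic agent loop: prompt template + chat history + model call.

PROMPT_TEMPLATE = (
    "Previous conversation:\n{history}\n\n"
    "Answer the following question concisely.\nQuestion: {question}"
)

def fake_llm(prompt: str) -> str:
    # Stand-in for OllamaLLM(model="mistral").invoke(prompt).
    return f"(model reply to: {prompt.splitlines()[-1]})"

class ChatHistory:
    """Simplified analogue of ChatMessageHistory: stores user/AI turns."""
    def __init__(self):
        self.messages = []
    def add_user_message(self, text):
        self.messages.append(("user", text))
    def add_ai_message(self, text):
        self.messages.append(("ai", text))
    def render(self):
        return "\n".join(f"{role}: {text}" for role, text in self.messages)

def ask(history: ChatHistory, question: str) -> str:
    # Build the full prompt from past turns, query the model, record both turns.
    prompt = PROMPT_TEMPLATE.format(history=history.render(), question=question)
    answer = fake_llm(prompt)
    history.add_user_message(question)
    history.add_ai_message(answer)
    return answer

history = ChatHistory()
print(ask(history, "What is an AI agent?"))
```

In the real application, a Streamlit text input feeds `ask`, and the accumulated history gives the model conversational context across turns.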
Building a Personal AI Assistant
Overview
Have you ever imagined having your own voice-enabled personal assistant? This application lets us
speak a query in a web browser interface and get a response. It is similar to the basic AI agent
seen previously, where the query had to be typed; here, a more intuitive voice interface
does the job instead.
Code Snippet with Python modules
import streamlit as st
import speech_recognition as sr
import pyttsx3
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.prompts import PromptTemplate
from langchain_ollama import OllamaLLM
…
…
Figure: AI Voice Assistant
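The omitted code ties speech recognition (`speech_recognition`), text-to-speech (`pyttsx3`), and the LLM together. A simplified sketch of that pipeline follows; the `transcribe`, `query_llm`, and `speak` functions are stand-in assumptions for the microphone/recognizer, the Ollama call, and the TTS engine, so the control flow can run anywhere.

```python
# Sketch of the voice-assistant pipeline: audio in -> text -> LLM -> audio out.

def transcribe(audio_bytes: bytes) -> str:
    # Stand-in for sr.Recognizer().recognize_google(audio).
    return audio_bytes.decode("utf-8")

def query_llm(question: str) -> str:
    # Stand-in for OllamaLLM(model="mistral").invoke(question).
    return f"Answer to '{question}'"

spoken = []  # captured output, in place of pyttsx3's engine.say()/runAndWait()

def speak(text: str) -> None:
    spoken.append(text)

def handle_utterance(audio_bytes: bytes) -> str:
    """One turn of the assistant: transcribe, query the model, speak the reply."""
    question = transcribe(audio_bytes)
    answer = query_llm(question)
    speak(answer)
    return answer

print(handle_utterance(b"What is agentic AI?"))
```

The real application replaces each stand-in stage with the corresponding library call, while the turn-by-turn control flow stays the same.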
Creating an AI-Powered Web Scraper Agent
Overview
Web scraping, or web harvesting, is a method of extracting and storing data from websites over HTTP
(Hypertext Transfer Protocol). Here, the web scraper crawls the given website URL, extracts and
stores the data in a vector database, and provides a summarized version of the website. In the
figure below, the URL “https://en.wikipedia.org/wiki/Artificial_intelligence” was provided
for crawling.
The FAISS (Facebook AI Similarity Search) vector database has been implemented here to store,
retrieve, and process knowledge for the AI assistant. FAISS is an open-source library designed for
efficient similarity search and clustering of dense vectors. It is used to build indexes and perform
searches quickly and with memory efficiency, even for very large datasets.
Code Snippet with Python modules
import requests
from bs4 import BeautifulSoup
import streamlit as st
import faiss
import numpy as np
from langchain_ollama import OllamaLLM
from langchain_huggingface import HuggingFaceEmbeddings # Updated Import
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import CharacterTextSplitter
from langchain.schema import Document
…
…
Figure: AI-Powered Web Scraper with a Vector Database
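At its core, the vector store maps text chunks to embeddings and answers queries by nearest-neighbour search. The brute-force sketch below illustrates that idea in plain Python; the toy bag-of-words "embedding" and linear cosine-similarity scan are simplifying assumptions standing in for the real embedding model and FAISS's optimized indexes.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use dense vectors
    # from HuggingFaceEmbeddings or similar.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    """Brute-force analogue of a FAISS index over text chunks."""
    def __init__(self):
        self.chunks = []
    def add(self, text: str) -> None:
        self.chunks.append((text, embed(text)))
    def search(self, query: str, k: int = 1):
        qv = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(qv, c[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = ToyVectorStore()
store.add("Artificial intelligence is intelligence demonstrated by machines.")
store.add("Web scraping extracts data from websites.")
print(store.search("what is artificial intelligence"))
```

FAISS performs the same retrieve-by-similarity operation, but with approximate indexes that stay fast and memory-efficient at millions of vectors, which is why it backs the scraper's summarization context here.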
Building AI-Powered Document Reader along with a Q&A Bot
Overview
Ever wondered about having a searchable library of your eBooks, documents, and other essential text
files? You can leverage this use case to build your own search engine.
This application allows a PDF to be uploaded, summarises the text in the PDF file, and accepts
queries relevant to the file. You can then ask a question and, based on the context, an
answer is provided by the LLM. For the demo, a PDF of a standard 10th Science
book chapter titled “Chemical Reactions and Equations” was uploaded. Later, a question was asked
relevant to the text in the PDF. See how intelligently the application answers!
Code Snippet with Python modules
import streamlit as st
import faiss
import numpy as np
import PyPDF2
from langchain_ollama import OllamaLLM
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import CharacterTextSplitter
from langchain.schema import Document
…
…
Figure: AI Document Reader and Q&A Bot
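The omitted code extracts the PDF text with PyPDF2, splits it into chunks, indexes them, and answers questions over the best-matching chunk. The retrieval step can be sketched as below; the separator-based splitter and keyword-overlap scoring are stand-in assumptions for `CharacterTextSplitter` and the embedding search.

```python
def split_text(text: str, separator: str = ". ") -> list:
    # Stand-in for CharacterTextSplitter: split the document on a separator.
    return text.split(separator)

def score(chunk: str, question: str) -> int:
    # Keyword overlap in place of embedding similarity.
    q_words = set(question.lower().split())
    return sum(1 for w in chunk.lower().split() if w in q_words)

def retrieve_context(document: str, question: str) -> str:
    """Pick the best-matching chunk; the real app feeds it to the LLM as context."""
    chunks = split_text(document)
    return max(chunks, key=lambda c: score(c, question))

doc = (
    "A chemical reaction rearranges atoms into new substances. "
    "A chemical equation uses symbols and formulae to represent a reaction. "
    "Balancing an equation conserves mass on both sides."
)
print(retrieve_context(doc, "what does a chemical equation represent"))
```

The retrieved chunk, rather than the whole document, is what gets packed into the LLM prompt, which keeps the context window small even for long PDFs.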
Figure: AI-Generated Summary with Q&A
Conclusion
While these use cases are a starting point for experimenting with LLMs, Ollama, and Python
programming with LLM extensions and modules, the scope and potential of generative AI and
artificial general intelligence are immense.
References
• https://ollama.com
• https://github.com/ollama/ollama
• https://en.wikipedia.org/wiki/Web_scraping
• https://github.com/facebookresearch/faiss
• https://ai.meta.com/tools/faiss
• https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search
• https://en.wikipedia.org/wiki/Artificial_intelligence
• https://en.wikipedia.org/wiki/Agentic_AI
• https://en.wikipedia.org/wiki/Large_language_model