6. Common AI Applications (Usage Areas)
Virtual Assistants
Recommendation Systems (websites)
Natural Language Processing (NLP)
Image Recognition
Healthcare
Finance
7. AI Categories - in general
Narrow AI: This type of AI is designed to perform a narrow task or a
specific set of tasks. Examples of narrow AI include virtual personal
assistants such as Siri and Alexa, and image recognition systems.
General AI: This refers to AI that possesses the ability to understand,
learn, and apply knowledge across a wide range of tasks, similar to
human intelligence. General AI remains largely theoretical and is the
subject of ongoing research and speculation.
8. AI types from a learning perspective
Supervised Learning:
In supervised learning, an AI model learns from labeled data, where each input is associated with a
corresponding output label.
Example:
Email spam detection:
The AI model learns to classify emails as either spam or non-spam (ham)
based on labeled training data.
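The spam example above can be sketched as a minimal Naive Bayes classifier; the training emails and vocabulary below are invented for illustration:

```python
from collections import Counter
import math

# Toy labeled training data: (email text, label) -- the "supervision"
train = [
    ("win money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

# Learning step: count word frequencies per class
word_counts = {"spam": Counter(), "ham": Counter()}
for text, label in train:
    word_counts[label].update(text.split())

def classify(text):
    """Score each class by summed log word probabilities (add-one smoothing)."""
    scores = {}
    for label, counts in word_counts.items():
        total, vocab = sum(counts.values()), len(counts)
        scores[label] = sum(
            math.log((counts[w] + 1) / (total + vocab)) for w in text.split()
        )
    return max(scores, key=scores.get)

print(classify("claim your free money"))  # spam
print(classify("team meeting tomorrow"))  # ham
```

The key point is that the model's behavior comes entirely from the labeled examples, not from hand-written spam rules.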
9. AI types from learning perspective
Unsupervised Learning:
Unsupervised learning involves training AI models on unlabeled data, where the model tries to find hidden
patterns or structures in the input data without explicit guidance.
Example: Customer segmentation:
The model is trained on a dataset containing customer attributes such as purchase history, browsing
behavior, age, and gender, without any explicit labels, and groups similar customers together.
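A minimal sketch of this idea using k-means clustering; the customer attributes (age, monthly spend) are invented values:

```python
import numpy as np

# Toy unlabeled customer data: [age, monthly spend] (hypothetical values)
X = np.array([
    [22, 30], [25, 35], [27, 40],    # younger, low-spend customers
    [55, 200], [60, 220], [58, 210], # older, high-spend customers
], dtype=float)

def kmeans(X, k=2, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # start from k data points
    for _ in range(iters):
        # Assign each point to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points
        centers = np.array([X[labels == i].mean(axis=0) for i in range(k)])
    return labels

labels = kmeans(X)
print(labels)  # the two natural groups land in different clusters
```

No labels were given; the segments emerge from the structure of the data alone.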
Reinforcement Learning:
An agent learns by interacting with its environment, receiving rewards or penalties for its actions.
Example: Autonomous Driving:
Reinforcement learning can be used to train autonomous vehicles to navigate complex environments and
make driving decisions.
10. Machine Learning (ML)
Machine learning is a subfield of artificial intelligence (AI) that focuses on developing algorithms
and techniques that enable computers to learn from data and improve their performance on tasks
without being explicitly programmed.
11. Deep Learning
Deep learning is a type of artificial intelligence (AI) that imitates the way the
human brain works to learn from data.
At the heart of deep learning are neural networks, which are computational
models inspired by the structure and function of the human brain.
A neural network consists of interconnected nodes, called neurons, organized
into layers.
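A minimal forward pass through such a layered network, sketched in NumPy; the layer sizes, random weights, and input values are arbitrary, for illustration only:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)          # common hidden-layer nonlinearity

def sigmoid(x):
    return 1 / (1 + np.exp(-x))      # squashes the output into (0, 1)

# A tiny network: 3 inputs -> 4 hidden neurons -> 1 output
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # layer 1 weights / biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # layer 2 weights / biases

def forward(x):
    h = relu(x @ W1 + b1)            # hidden layer: weighted sum + nonlinearity
    return sigmoid(h @ W2 + b2)      # output layer

x = np.array([0.5, -1.0, 2.0])
y = forward(x)
print(y.shape)  # (1,)
```

Training would adjust W1, b1, W2, b2 via backpropagation; this sketch shows only how the interconnected layers transform an input.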
12. Neural Network vs Deep learning
Link1: https://www.youtube.com/watch?v=UuCTfDvdeoU
Link2: https://www.youtube.com/watch?v=f0t-OCG79-U
14. Types of deep learning
Convolutional Neural Networks (CNNs): CNNs are primarily used for visual
imagery tasks such as image recognition, object detection, and image
classification. They are designed to automatically and adaptively learn spatial
hierarchies of features through the application of convolutional filters.
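The core operation behind CNNs can be sketched with a hand-rolled 2D convolution; the image and edge filter below are illustrative values (real CNNs learn their filter weights):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

# A tiny image with a vertical edge: dark left, bright right
image = np.array([[0, 0, 0, 1, 1]] * 4, dtype=float)

# A filter that responds to vertical edges
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)

response = conv2d(image, edge_filter)
print(response)  # nonzero only near the vertical edge
```

Stacking many such learned filters, layer after layer, is what lets a CNN build spatial hierarchies of features.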
15. Types of deep learning
Recurrent Neural Networks (RNNs): RNNs are designed to work with
sequential data, such as time series, natural language, and audio. They have
connections that form directed cycles, allowing them to exhibit temporal dynamic
behavior.
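The recurrence at the heart of a simple (Elman-style) RNN can be sketched as follows; the weights are random, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 5, 3
Wx = rng.normal(size=(input_size, hidden_size))   # input -> hidden
Wh = rng.normal(size=(hidden_size, hidden_size))  # hidden -> hidden (the cycle)

def rnn_forward(sequence):
    h = np.zeros(hidden_size)             # initial hidden state
    for x in sequence:
        h = np.tanh(x @ Wx + h @ Wh)      # new state depends on the input AND
    return h                              # the previous state: that's the memory

sequence = [np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 1.0, 0.0]),
            np.array([0.0, 0.0, 1.0])]
final_state = rnn_forward(sequence)
print(final_state.shape)  # (5,)
```

Because `h` is fed back into itself, the final state summarizes the whole sequence, which is what makes RNNs suit time series, language, and audio.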
16. Types of deep learning
Deep Reinforcement Learning (DRL): DRL combines deep learning techniques
with reinforcement learning, a type of machine learning where an agent learns to
interact with an environment to achieve a goal. DRL has been successful in
training agents to play complex video games, control robots, and make decisions
in various domains.
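The reinforcement-learning loop itself can be shown with tabular Q-learning, the non-deep precursor in which a lookup table plays the role of the neural network; the one-dimensional corridor environment below is invented for illustration:

```python
import random

# Tabular Q-learning on a 1-D corridor: states 0..4, reward only at state 4.
# (Deep RL replaces this table with a neural network; the table version
# keeps the learning loop easy to see.)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                         # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2
random.seed(0)

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(2000):                      # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: usually exploit the best known action, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        nxt, r = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted future value
        best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy moves right (+1) from every state
policy = {s: greedy(s) for s in range(GOAL)}
print(policy)
```

The agent is never told the corridor's layout; it discovers the "go right" policy purely from trial, error, and reward, which is the same loop DRL scales up to games and robotics.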
17. Types of deep learning
Transformers: Transformers are a type of deep learning architecture that has
gained popularity in natural language processing (NLP). They rely on attention
mechanisms to weigh the importance of different input tokens when processing
sequences of data. Transformer-based models, such as GPT (Generative Pre-trained
Transformer), have achieved state-of-the-art results in tasks like language
translation, text generation, and question answering.
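The attention mechanism can be sketched as scaled dot-product attention, the Transformer's core operation; the token embeddings here are random, for illustration only:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: weight each value by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # how strongly each token attends to each other
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# 3 tokens with 4-dimensional embeddings (random, for illustration)
rng = np.random.default_rng(1)
x = rng.normal(size=(3, 4))
out, weights = attention(x, x, x)       # self-attention: Q = K = V = x
print(weights.sum(axis=-1))             # each row of weights sums to 1
```

The `weights` matrix is exactly the "importance of different input tokens" the slide describes; real Transformers add learned projections, multiple heads, and feed-forward layers around this operation.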
18. NLP (Natural Language Processing)
NLP is a machine learning technology that gives computers the ability to interpret,
manipulate, and comprehend human language.
or
NLP is a field of AI that focuses on the interaction between computers and
humans through natural language. It involves the development of algorithms and
techniques to enable computers to understand, interpret, generate, and respond
to human language in a meaningful way.
20. Fundamentals of NLP
Text Normalization:
Text normalization involves converting text to a canonical form to make it uniform and
easier to process.
Techniques include lowercase conversion, punctuation removal, and lemmatization.
Example:
Input: "He runs quickly, running and runner."
Output: "he run quick run and runner"
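A minimal normalization step (lowercasing, punctuation removal, whitespace collapsing; lemmatization is omitted here since it needs a lexicon) might look like:

```python
import re
import string

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

print(normalize("He runs quickly, running and runner."))
# -> "he runs quickly running and runner"
```

Adding lemmatization (e.g. mapping "runs" and "running" to "run") would take this output the rest of the way to the slide's canonical form.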
21. Fundamentals of NLP
Tokenization:
Tokenization is the process of breaking text into smaller units, such as words or
subwords, called tokens.
Example:
Input: "Natural Language Processing is fascinating!"
Output: ["Natural", "Language", "Processing", "is", "fascinating", "!"]
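A simple regex-based tokenizer reproduces the example above (production tokenizers, e.g. subword tokenizers, are considerably more involved):

```python
import re

def tokenize(text):
    """Split text into word tokens and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Natural Language Processing is fascinating!"))
# -> ['Natural', 'Language', 'Processing', 'is', 'fascinating', '!']
```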
22. Fundamentals of NLP
Part-of-Speech (POS) Tagging:
POS tagging assigns grammatical categories (e.g., noun, verb, adjective) to each
word in a sentence.
Example:
Input: "The cat is sleeping on the mat."
Output: [("The", "DT"), ("cat", "NN"), ("is", "VBZ"), ("sleeping", "VBG"), ("on", "IN"), ("the", "DT"), ("mat", "NN"), (".", ".")]
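A toy lookup-based tagger illustrates the idea; real taggers are statistical (e.g. NLTK's `pos_tag`), and the lexicon below is hand-made for this one sentence, using Penn Treebank tags:

```python
# Hand-made lexicon (hypothetical; real taggers learn tags from corpora)
LEXICON = {
    "the": "DT", "cat": "NN", "is": "VBZ",
    "sleeping": "VBG", "on": "IN", "mat": "NN", ".": ".",
}

def pos_tag(tokens):
    """Tag each token via lookup; unknown words default to noun (NN)."""
    return [(t, LEXICON.get(t.lower(), "NN")) for t in tokens]

print(pos_tag("The cat is sleeping on the mat .".split()))
```

A pure lookup fails on ambiguous words ("run" as noun vs verb), which is why practical taggers condition on surrounding context.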
23. Fundamentals of NLP
Text Classification:
Text classification categorizes text into predefined classes or categories based on
its content.
Example:
Input: "Congratulations, you won a free prize! Click here to claim."
Output: Spam
24. Fundamentals of NLP
Language Translation:
Language translation translates text from one language to another while
preserving meaning.
Example:
Input: "Bonjour, comment ça va?"
Output: "Hello, how are you?"
25. Fundamentals of NLP
Text Generation:
Text generation generates new text based on a given prompt or input.
Example:
Input: "What is NLP?"
Output: "NLP stands for Natural Language Processing."
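A toy bigram (Markov-chain) generator shows the core idea that modern neural language models scale up: predict the next word from context. The tiny corpus is invented for illustration:

```python
import random
from collections import defaultdict

corpus = ("nlp stands for natural language processing . "
          "natural language processing is a field of ai .").split()

# Count which word follows which in the corpus
nexts = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nexts[a].append(b)

def generate(start, n=6, seed=3):
    """Repeatedly sample a likely next word, starting from `start`."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        candidates = nexts.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("natural"))  # e.g. "natural language processing ..."
```

An LLM replaces this word-pair table with a neural network conditioned on long contexts, but both generate text token by token.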
26. LLMs (Large Language Models)
LLMs are a class of machine learning models that are trained on vast amounts of text
data to understand and generate human-like language. Examples:
● ChatGPT
● Google Gemini
● MS Copilot
● X Grok
Generative AI encompasses a wide range of techniques and models that are
capable of generating new data samples across various domains.
LLMs are a subset of Generative AI.