Machine learning can improve digital marketing in several ways:
1) Targeting - Machine learning can identify the customers most likely to respond to a message.
2) Personalization - Machine learning can deliver tailored content and offers based on each customer's interests.
3) Prediction - Machine learning can predict customer behavior, such as purchasing habits.
4) Budget optimization - Machine learning can optimize the allocation of marketing budgets.
5) Measurement - Machine learning can measure the effectiveness of marketing campaigns.
The Revolution of Digital Marketing in the Artificial Intelligence era
1. The Revolution of Digital Marketing in the Artificial Intelligence era
By: Eng. Mohamed Hanafy
2. How is Artificial Intelligence (AI) changing the Market world?
• Ad targeting and ads media analysis
• Identifying micro-influencers
• Data Analysis with LLMs
• Intelligent Customer Engagement
• Sentiment analysis
• Churn prediction
• Dynamic pricing
• Chatbots
• AI-driven content
• Text Generation
• Music Generation
• Generating Songs with Vocals
• Text to Image
• Graphic Design
• AI video generation
• Personalized design
3. Introduction to AI
• Introduction to AI
• What is AI?
• History of AI
• Types of AI
• Applications of AI
• Search and Reasoning
• Search algorithms
• Heuristics
• Logic and reasoning
• Machine Learning
• Supervised learning
• Unsupervised learning
• Reinforcement learning
• Natural Language Processing
• Text classification
• Named entity recognition
• Machine translation
• Computer Vision
• Object detection
• Image classification
• Face recognition
4. What is AI?
• AI is a branch of computer science that deals with the creation of intelligent
agents, which are systems that can reason, learn, and act autonomously.
• AI research has been highly successful in developing effective techniques for
solving a wide range of problems, from game playing to medical diagnosis.
• However, there is still no consensus on what constitutes true AI, and many
experts believe that we are still many years away from creating truly
intelligent machines.
5. History of AI
• The field of AI was founded in the 1950s, when a group of researchers at
Dartmouth College proposed the creation of a "thinking machine."
• Early AI research focused on developing symbolic AI systems, which were
designed to mimic human reasoning.
• However, these systems were often brittle and difficult to scale, and they did
not achieve the same level of success as other areas of computer science.
• In the 1980s, there was a shift in AI research towards machine learning, a
field that focuses on developing systems that can learn from data.
• Machine learning has been much more successful than symbolic AI, and it has
led to the development of many powerful AI systems, such as Google's search
engine and Amazon's recommendation engine.
8. Types of AI
• There are many different types of AI, but they can generally be divided into two
categories: narrow AI and general AI.
• Narrow AI systems are designed to solve a specific problem, such as playing chess
or Go.
• General AI systems are designed to be more versatile and can be used to solve a
wider range of problems.
• However, general AI systems are still under development, and there is no
consensus on whether they will ever be possible to create.
9. Search algorithms
• There are many different types of search algorithms, each with its own advantages and
disadvantages. Some of the most common search algorithms include:
• Linear search: This algorithm checks every record in a data structure until it finds the target record. It
is the simplest search algorithm, but it is also the least efficient.
• Binary search: This algorithm compares the target with the middle record of the search space and, at
each step, discards the half that cannot contain the target. It is more efficient than linear search, but
it requires the data structure to be sorted.
• Hashing: This algorithm uses a hash function to convert a key into a value that is used to index into a
data structure. This can be very efficient for finding records, but it requires building and maintaining
a hash table.
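To make the trade-offs concrete, here is a minimal Python sketch of linear and binary search (illustrative code, not part of the original deck):

    def linear_search(items, target):
        """Check every record in turn; O(n), works on unsorted data."""
        for i, item in enumerate(items):
            if item == target:
                return i
        return -1

    def binary_search(sorted_items, target):
        """Halve the search space at each step; O(log n), input must be sorted."""
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    data = [3, 8, 15, 23, 42, 57]
    print(linear_search(data, 23))  # 3
    print(binary_search(data, 23))  # 3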
10. Traveling Salesman Problem (TSP)
• The number of TSP routes grows exponentially with the number of cities. For 10 cities, there are over 300,000 routes, and for 15 cities, over 87 billion routes.
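These counts follow from factorial growth: fixing a starting city, n cities admit (n-1)! distinct tours. A quick sanity check in Python (illustrative):

    import math

    for n in (10, 15):
        print(n, math.factorial(n - 1))
    # 10 -> 362,880 (over 300,000 routes)
    # 15 -> 87,178,291,200 (over 87 billion routes)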
11. Heuristics
A heuristic is a mental shortcut that allows us to make quick judgments or decisions based on
limited information. Heuristics are often used in everyday life, such as when we are trying to decide
which restaurant to eat at or which movie to watch.
There are many different types of heuristics, but some of the most common ones include:
• Availability heuristic: This heuristic states that we tend to judge the probability of an event based
on how easily examples of that event come to mind. For example, if we can easily think of
several examples of people who have been in car accidents, we may be more likely to believe
that car accidents are common.
• Representativeness heuristic: This heuristic states that we tend to judge the probability of an
event based on how similar it is to our mental model of a typical example of that event. For
example, if we see a man with a beard and a long white robe, we may be more likely to believe
that he is a rabbi, even if we know that there are other people who fit that description who are
not rabbis.
• Anchoring heuristic: This heuristic states that we tend to rely too heavily on the first piece of
information we are given when making a decision. For example, if we are asked to estimate the
number of jelly beans in a jar, we may be more likely to give a number that is close to the first
number we see, even if it is not accurate.
• Heuristics can be helpful in making quick decisions, but they can also lead to errors. It is
important to be aware of the limitations of heuristics and to use them with caution.
12. Logic and reasoning
• Reasoning is the process of using logic to reach a conclusion. When we reason, we start
with a set of premises, which are statements that we believe to be true. We then use these
premises to reach a conclusion. The conclusion is a statement that follows logically from
the premises.
• There are many different types of reasoning, but some of the most common ones include:
• Deductive reasoning: Deductive reasoning is a type of reasoning in which the conclusion is
guaranteed to be true if the premises are true. For example, if we know that all dogs are
mammals and that Spot is a dog, then we can deduce that Spot is a mammal.
• Inductive reasoning: Inductive reasoning is a type of reasoning in which the conclusion is
likely to be true, but it is not guaranteed to be true. For example, if we observe that all of
the swans we have seen are white, then we can induce that all swans are white. However, it
is possible that there is a swan that is not white.
• Abductive reasoning: Abductive reasoning is a type of reasoning in which we make the
best explanation for a set of facts. For example, if we find a dead body in the library, we
might abduce that the person was murdered. However, there are other possible
explanations for the person's death, such as a heart attack or an accident.
14. The Roots
A diagram of the foundations underlying the field:
• Mathematics: Discrete Mathematics, Single-Variable Calculus, Linear Algebra, Multivariate Calculus, Probability, Optimization, Signals & Systems, Statistics, Multivariate Statistics, Data Visualization
• Software: Programming, Data Structures, Algorithms
• Cloud Computing: AWS, Microsoft Azure, Watson
• These roots feed into Machine Learning, Deep Learning, and Artificial Intelligence
15. Machine Learning (ML)
• Machine Learning (ML) is a subfield of artificial intelligence (AI).
• It focuses on the use of data and algorithms to enable computers to learn and
improve their performance on a specific task without being explicitly programmed.
• Machine learning algorithms can be trained on a dataset to make classifications or
predictions, and to uncover key insights in data mining projects.
• There are several types of machine learning, including supervised learning,
unsupervised learning, and reinforcement learning.
• Each type involves different approaches to training algorithms and making
predictions or decisions based on data.
16. Types of Machine Learning (ML)
• Supervised learning is a methodology in data science that creates a model to
predict an outcome based on labeled data. Labeled data contains a collection
of variables (features) and a specific output that we are trying to predict.
• Unsupervised learning is a type of machine learning that involves finding
patterns or relationships in data without using labeled data. It is often used
for exploratory analysis and anomaly detection because it helps to see how
the data segments relate and what trends might be present.
• Reinforcement learning is a type of machine learning that involves an agent
learning to make decisions based on rewards and punishments. The agent is
not given explicit feedback about what is right or wrong but must learn from
its own actions and experiences.
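As a concrete illustration (not from the original deck), the scikit-learn sketch below contrasts supervised and unsupervised learning on the same toy customer data; the features and labels are invented for the example:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    # Toy feature matrix: [hours on site, pages viewed]
    X = np.array([[0.5, 2], [1.0, 3], [4.0, 12], [5.0, 15]])
    y = np.array([0, 0, 1, 1])  # labels: 1 = made a purchase

    # Supervised: learn a mapping from features to the labeled outcome
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[3.0, 10]]))  # predicted class for a new customer

    # Unsupervised: find structure in the same data without any labels
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)  # cluster assignment for each customer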
18. The Predictive Data Analytics Project Lifecycle: CRISP-DM
Cross Industry Standard Process for Data Mining (CRISP-DM)
analytics base table (ABT)
19. Predictive Data Analytics Tools
• Application-based, or point-and-click tools:
• IBM SPSS
• SAS Enterprise Miner
• Knime Analytics Platform
• RapidMiner Studio
• Weka
• Programming languages:
• R
• Python
20. Leveraging fine-grained transaction data for customer life event predictions
Aim
• To predict four different customer life events: moving, birth of a child, new relationship, and end of a relationship.
Objectives
• Reveal a pseudo-social network that supports the derivation of behavioral similarity measures, to advance the decision support systems literature.
• Validate the proposed customer life event prediction model in a real-world setting in the financial services industry.
21. Dataset
• Combines aggregated customer data, including customer demographics, behavior, and contact with the firm, with fine-grained transaction data.
• Source: a large European financial services provider.
• Approximately 60 million debit transactions involving around 132,000 customers and 1.5 million different counterparties over a one-year period.
24. • First, the components pertaining to the seed customers, which we called
behavioral similarity terms, are calculated differently.
• Second, two variants down-weight popular merchants based on inverse
consumer frequency (ICF) or a cross-validated beta distribution.
25. Questions
• How can machine learning be used to improve the targeting of marketing
campaigns?
• How can machine learning be used to personalize the customer experience?
• How can machine learning be used to predict customer behavior?
• How can machine learning be used to optimize marketing budgets?
• How can machine learning be used to measure the effectiveness of
marketing campaigns?
26. Machine learning can be used to improve digital
marketing in a variety of ways.
• Targeting: Machine learning can be used to identify the most likely
customers to respond to a particular message.
• Personalization: Machine learning can be used to deliver content and offers
that are tailored to each individual's interests.
• Prediction: Machine learning can be used to predict customer behavior,
such as which products they are likely to buy or when they are likely to make
a purchase.
• Budget optimization: Machine learning can be used to allocate resources to
the channels and campaigns most likely to deliver results.
• Measurement: Machine learning can be used to measure the effectiveness of
marketing campaigns.
28. What is Deep Learning?
Deep Learning is a subset of Machine Learning built on neural networks that are
designed to mimic human decision-making capabilities. It can be applied to any
problem that requires thought, human or artificial. Any deep neural network
will consist of three types of layers:
• The Input Layer
• The Hidden Layers
• The Output Layer
We can say Deep Learning is the newest term in the field of Machine Learning. It's a way
to implement Machine Learning.
29. Deep Learning Theory
• The system is "dumb" (i.e., mechanical).
• It "learns" with big data (lots of input examples) and trial-and-error guesses
that adjust weights and biases and establish key features.
• This creates a predictive system that identifies new examples.
• Same AI argument: big enough data makes a difference ("simple" algorithms run over large data sets).
• Input: big data (e.g., many examples). Method: trial-and-error guesses to adjust node weights. Output: the system identifies new examples.
31. Deep Neural Networks
• Layers
• Neural networks typically organize their neurons into layers. When we
collect together linear units having a common set of inputs we get
a dense layer.
• The Activation Function
• An activation function is simply some function we apply to each of a
layer's outputs (its activations).
• The most common is the rectifier function, max(0, x), known as ReLU.
32. Deep Neural Networks
Python Code (Keras)

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        # the hidden ReLU layers
        layers.Dense(units=4, activation='relu', input_shape=[2]),
        layers.Dense(units=3, activation='relu'),
        # the linear output layer
        layers.Dense(units=1),
    ])
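To train this network you would compile it with a loss and an optimizer and fit it to data. A minimal continuation of the example above, using synthetic data purely for illustration:

    import numpy as np

    # synthetic data: 2 input features, 1 regression target
    X = np.random.rand(100, 2)
    y = X.sum(axis=1, keepdims=True)

    model.compile(optimizer='adam', loss='mae')
    model.fit(X, y, epochs=10, batch_size=16, verbose=0)
    print(model.predict(X[:3]))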
33. Deep Learning Architectures
• RNN: Recurrent Neural Networks
• LSTM: Long Short-Term Memory
• CNN: Convolutional Neural Networks
• DBN: Deep Belief Network
• DSN: Deep Stacking Network
37. Natural language processing (NLP)
Natural language processing (NLP) is a field of computer science that studies
how computers can understand and process human language. NLP is a
subfield of artificial intelligence (AI).
• Machine translation
• Speech recognition
• Question Answering
• Sentiment analysis
• Text summarization
• Information Extraction
NLP is commonly divided into natural language understanding (NLU) and natural language generation (NLG).
Key NLP techniques:
• Named Entity Recognition
• Sentiment Analysis
• Text Summarization
• Aspect Mining
• Topic Modeling
38. Large Language Models (LLMs)
• LLMs are language models consisting of a neural network with many parameters (typically
billions of weights or more).
• They are trained on large quantities of unlabeled text using self-supervised learning or
semi-supervised learning.
• LLMs emerged around 2018 and perform well at a wide variety of tasks.
• They have shifted the focus of natural language processing research away from the
previous paradigm of training specialized supervised models for specific tasks.
• LLMs are general-purpose models that excel at a wide range of tasks, as opposed to being
trained for one specific task.
• The skill with which they accomplish tasks and the range of tasks at which they are capable
seems to be a function of the amount of resources (data, parameter-size, computing
power) devoted to them.
• Though trained on simple tasks such as predicting the next word in a sentence, neural
language models with sufficient training and parameter counts are found to capture much
of the syntax and semantics of human language.
• In addition, large language models demonstrate considerable general knowledge about the
world and are able to “memorize” a great quantity of facts during training.
39. Text pre-processing
• Stop word removal involves removing common words that do not carry
significant meaning, such as "and", "the", "is", etc. These words are called stop
words and are usually removed from the text to reduce noise and improve the
performance of natural language processing algorithms.
• Tokenization is the process of breaking down a text into individual units called
tokens. These tokens can be words, phrases, or even sentences, depending on
the level of granularity required for the task at hand. Tokenization is an
essential step in text preprocessing as it allows algorithms to work with text
data at a more granular level.
• Stemming is the process of reducing words to their base or root form. For
example, the words "running", "runs", and "ran" all have the same root form
"run". Stemming algorithms work by removing the suffixes (and in some cases
prefixes) from a word to obtain its root form. This can help reduce the
dimensionality of the data and improve the performance of natural language
processing algorithms.
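A minimal sketch of these three steps using NLTK, one common choice of toolkit (the two downloads fetch tokenizer models and stop word lists on first run):

    import nltk
    from nltk.corpus import stopwords
    from nltk.stem import PorterStemmer
    from nltk.tokenize import word_tokenize

    nltk.download('punkt')      # tokenizer models (first run only)
    nltk.download('stopwords')  # stop word lists (first run only)

    text = "The runners were running and ran faster than the other runs."

    tokens = word_tokenize(text.lower())                             # tokenization
    stop = set(stopwords.words('english'))
    filtered = [t for t in tokens if t.isalpha() and t not in stop]  # stop word removal
    stemmer = PorterStemmer()
    print([stemmer.stem(t) for t in filtered])                       # stemming
    # e.g. ['runner', 'run', 'ran', 'faster', 'run']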
40. Tokenization
• Tokenization is the process of breaking down a text into individual units called tokens. These
tokens can be words, phrases, or even sentences, depending on the level of granularity required
for the task at hand. Tokenization is an essential step in text preprocessing as it allows algorithms
to work with text data at a more granular level.
Tools for tokenization
• NLTK (Natural Language Toolkit)
• TextBlob
• spaCy
• Stanford CoreNLP
• Gensim
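For example, tokenizing a sentence with spaCy, one of the tools listed above (the small English model must first be installed with "python -m spacy download en_core_web_sm"):

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("AI is transforming digital marketing.")
    print([token.text for token in doc])
    # ['AI', 'is', 'transforming', 'digital', 'marketing', '.']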
41. Embedding
• Embedding refers to the process of representing words, phrases, or even
entire documents as fixed-length vectors of real numbers. These vectors
are designed to capture the meaning and context of the text data in a way
that can be easily understood and manipulated by machine learning
algorithms.
42. Embedding techniques
• Word2Vec: Developed by Google, Word2Vec is a neural network-based model that
learns word embeddings from text data. It uses a skip-gram or continuous
bag-of-words (CBOW) architecture to predict the context words given a target word or
vice versa.
• GloVe: GloVe (Global Vectors for Word Representation) is another word embedding
technique developed by Stanford University. It combines global matrix
factorization and local context window methods to learn word embeddings from
co-occurrence statistics.
• fastText: Developed by Facebook, fastText is a word embedding technique that
extends the Word2Vec model by incorporating subword information. This allows it
to learn embeddings for rare words and even out-of-vocabulary words by
representing them as a sum of their character n-grams.
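A minimal gensim sketch of training Word2Vec on a toy corpus; the parameters are illustrative, and real corpora are far larger:

    from gensim.models import Word2Vec

    sentences = [
        ["machine", "learning", "improves", "marketing"],
        ["deep", "learning", "is", "machine", "learning"],
        ["marketing", "uses", "customer", "data"],
    ]

    # sg=1 selects the skip-gram architecture; sg=0 would use CBOW
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

    print(model.wv["marketing"].shape)        # (50,) -- a fixed-length vector
    print(model.wv.most_similar("learning"))  # nearest words in embedding space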
43. Transformer (machine learning model)
• The transformer is a type of deep learning architecture introduced in the 2017 paper "Attention Is All You Need" by Vaswani et al.
• It uses self-attention mechanisms to weigh the significance of different parts of the input data.
• The transformer architecture follows an encoder-decoder structure and consists of two main components: the encoder stacks and the decoder stacks.
• It has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch.
• The transformer has been widely adopted for natural language processing tasks and has also been applied to computer vision tasks.
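The heart of self-attention is the scaled dot-product of query, key, and value matrices, softmax(QK^T / sqrt(d_k))V. A NumPy sketch with illustrative shapes:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Weigh the values V by the similarity between queries Q and keys K."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))  # 3 tokens, dim 4
    print(scaled_dot_product_attention(Q, K, V).shape)     # (3, 4)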
44. Large Language Models (LLMs)
• GPT-3 (Generative Pre-trained Transformer 3), developed by OpenAI.
• BERT (Bidirectional Encoder Representations from Transformers), developed by Google AI.
• RoBERTa (Robustly Optimized BERT Approach), developed by Facebook AI.
• T5 (Text-to-Text Transfer Transformer), developed by Google AI.
• CTRL (Conditional Transformer Language Model), developed by Salesforce Research.
• Megatron-Turing NLG, developed by Microsoft and NVIDIA.
• ERNIE 3.0 Titan, developed by Baidu.
• Claude, developed by Anthropic.
• GLaM (Generalist Language Model), developed by Google AI.
45. Open Source Resources for Large Language
Models
• Hugging Face: Hugging Face provides a variety of pre-trained Large
Language Models and tools for fine-tuning them.
• TensorFlow: TensorFlow provides several Large Language Models, such as
BERT and GPT-2, along with tools for training and fine-tuning them.
• PyTorch: PyTorch provides several Large Language Models, such as
Transformer-XL and GPT-2, along with tools for training and fine-tuning
them.
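For instance, a pre-trained model can be used in a few lines with Hugging Face's transformers library (the pipeline downloads a default sentiment model on first use):

    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    print(classifier("This campaign exceeded our expectations!"))
    # [{'label': 'POSITIVE', 'score': 0.99...}]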
46. It is the third-generation language prediction model in the
GPT-n series created by OpenAI, a San Francisco-based
artificial intelligence research laboratory.
47. GPT-3
• GPT-3 was introduced by OpenAI in May 2020 as a
successor to their previous language model (LM), GPT-2.
• Generative Pre-trained Transformer 3 (GPT-3) is an
autoregressive language model that uses deep learning to
produce human-like text.
• OpenAI trained GPT-3, with more than 175 billion
parameters, on a massive corpus of text, making it the
largest language model at the time.
• Trained on a massive dataset (from sources like Common
Crawl, Wikipedia, and more).
• GPT-3 has seen millions of conversations and can calculate
which word (or even character) should come next in
relation to the words around it.
48. GPT-3 Dataset
• The Common Crawl data was downloaded from 41 shards of monthly
Common Crawl covering 2016 to 2019.
• It constitutes 45 TB of compressed plaintext before filtering and 570 GB after
filtering.
• That is roughly equivalent to 400 billion byte-pair-encoded tokens.
51. What capabilities does GPT-3 AI offer?
• Language Translation
• Text Classification
• Sentiment Extraction
• Reading Comprehension
• Named Entity Recognition
• Question Answer Systems
• News Article Generation
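At the time, these capabilities were exposed through OpenAI's Completions API. A minimal sketch using the legacy openai Python client of the GPT-3 era; the engine name, prompt, and key are placeholders:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    response = openai.Completion.create(
        engine="davinci",  # a GPT-3 era engine name
        prompt="Translate to French: Hello, how are you?",
        max_tokens=60,
        temperature=0.3,
    )
    print(response.choices[0].text.strip())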
54. Background to BERT
• Bidirectional Encoder Representations from Transformers (BERT) is a machine learning model, which
uses natural language processing techniques to transform text. BERT is pre-trained by Google.
• Context-free models such as word2vec or GloVe generate a single word embedding representation for
each word in the vocabulary, whereas BERT takes into account the context of each occurrence of a given
word.
• This enables better contextualization of the model’s output, which considers not only the importance of
the word and how frequently it’s used but also the context of the use.
• In 2019, Google Search started applying BERT models for search queries in English. In three months'
time, BERT had already expanded to over 70 languages. A year after its introduction, almost every single
English-based query was processed by BERT.
55. GPT-3 vs BERT
• GPT-3 has been trained on 175 billion parameters, while BERT has been trained on
340 million parameters
• BERT requires elaborate fine-tuning, while GPT-3 uses few-shot learning to quickly
predict output results with minimal input
• GPT-3 isn't publicly available (you need to be accepted to OpenAI's waitlist),
whereas BERT is a publicly accessible open-sourced model
58. Website Meta Descriptions Using Python And BERT
• AI can be used to automatically generate
meta descriptions for web pages
• Several tools available that use AI to
generate meta descriptions
• Tools typically require you to input the
title and target keyword for your post
• AI then generates a meta description for
you
• Examples of tools include: SEO Toolbelt,
Dashword, Frase.io, and Copy.ai
62. Important note about the model's capabilities to
process text
• One important thing to note is that BERT will only be able to generate a meta
description for texts longer than 400 words. This also corresponds with Google's
understanding of thin pages.
• Pages with fewer than 400 words will typically be flagged in the audits of tools such
as Screaming Frog or Sitebulb. Ideally, you should strive to provide sufficient
content on all of your indexable pages for both users and search engines to
navigate your site successfully.
Then check if the keyword is contained within the meta
description; if it is not (and it is relevant to the content on
the page), try to add it in a sentence.
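A sketch of this workflow, with a Hugging Face summarization pipeline standing in for the author's BERT-based summarizer (the model choice and function name are assumptions, and the 400-word rule mirrors the note above):

    from transformers import pipeline

    summarizer = pipeline("summarization")  # downloads a default model on first use

    def generate_meta_description(page_text, keyword):
        if len(page_text.split()) < 400:  # skip thin pages, per the note above
            return None
        summary = summarizer(page_text, max_length=60, min_length=20)[0]["summary_text"]
        if keyword.lower() not in summary.lower():
            print(f"Keyword '{keyword}' missing; try adding it in a sentence.")
        return summary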
64. Revolutionizing Data Analysis with Large Language Models
• LLMs are transforming data engineering by
providing a powerful tool for processing and
understanding large volumes of text-based
data.
• With advanced NLP capabilities, LLMs can
help extract valuable insights from
unstructured data.
• LLMs can be integrated into data
engineering workflows to enhance data
processing, improve data quality, and
increase the speed and accuracy of data
analysis.
• LLMs have the ability to integrate with a
large number of data sources.
66. Introducing Microsoft 365 Copilot
• Microsoft has introduced a new capability for LLMs
called Microsoft 365 Copilot.
• Copilot combines the power of LLMs with your data
in the Microsoft Graph and the Microsoft 365 apps to
turn your words into a powerful productivity tool.
• Copilot works alongside you in the Microsoft 365
apps to unleash creativity, unlock productivity, and
uplevel skills.
• For example, Copilot in Word can write, edit,
summarize, and create right alongside you.
69. Identifying micro-influencers
• Micro-influencers are individuals with a smaller but more engaged following on
social media, who are considered experts in their respective niche.
• Micro-influencer marketing has become a popular and effective strategy for
marketers.
• Micro-influencers can have a bigger impact on purchasing decisions than top
influencers due to their proximity, accessibility, and shared experiences with
their followers.
71. Predictive Analytics for Customer Behaviour
• Uses historical data and machine learning to
predict future customer behavior
• Lead scoring: predicts which leads are most
likely to convert
• Predicts consumer behaviors and assesses
Customer Lifetime Value (CLV)
• Predicts optimal pricing and frequency for
posting
• Predicts customer defection
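Lead scoring, for example, is commonly framed as binary classification over historical leads. A hedged scikit-learn sketch with invented features and labels:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical leads: [email opens, site visits, days since last contact]
    X = np.array([[1, 2, 30], [8, 10, 2], [0, 1, 60],
                  [12, 15, 1], [3, 4, 20], [9, 8, 3]])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = lead converted

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # The predicted conversion probability becomes the lead's score
    new_lead = np.array([[7, 9, 4]])
    print(model.predict_proba(new_lead)[0, 1])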
73. Suggested predictive audiences
• Purchasing users who are likely to not visit your property in the next 7 days.
• Users who are likely to not visit your property in the next 7 days.
• Users who are likely to make a purchase in the next 7 days.
• Users who are likely to make their first purchase in the next 7 days.
• Users who are predicted to generate the most revenue in the next 28 days.
78. Music Generation Using Deep Learning
• Deep learning techniques have been used for music generation for the past
two decades
• Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs)
have been used to generate music
• Deep generative models such as Variational Autoencoders (VAEs) and
Generative Adversarial Networks (GANs) can be used to create realistic
synthetic data
• These models have been used to create novel neural network architectures
that can generate new music
Tools
• AIVA
• Amper Music
• Jukebox by OpenAI
• DeepComposer by AWS
86. AI Tools for Generating Songs with Vocals
• Several tools and applications available that use
AI to generate songs with vocals
• Popular tools include Voicemod and Uberduck
• These tools use deep learning algorithms to
generate realistic-sounding vocals
• Example: "Heart On My Sleeve"
89. Whisper
OpenAI's Whisper is a speech-to-text, or
automatic speech recognition, model. It
is a "weakly supervised" encoder-decoder
transformer trained on 680,000 hours of
audio. Not only can it transcribe English,
it can transcribe 96 other languages,
along with being able to translate from
those languages to English.
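Whisper is open source and can be run locally with the openai-whisper Python package. A minimal transcription sketch (the model size and audio file name are illustrative):

    import whisper

    model = whisper.load_model("base")          # smaller models are faster, less accurate
    result = model.transcribe("interview.mp3")  # language is auto-detected
    print(result["text"])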
93. AI tools for generating text to speech
• There are several AI tools available that can generate text to speech.
• These tools use advanced machine learning algorithms to convert written text into natural-sounding
speech.
• Some popular AI tools for generating text to speech include:
• Google Cloud Text-to-Speech
• Microsoft Azure Text to Speech
• Murf AI
• These tools offer features such as customizable voices, fine-grained audio controls, and support for
multiple languages.
• They can be used to create voiceovers for videos, podcasts, and professional presentations.
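As an illustration, the Google Cloud Text-to-Speech Python client can synthesize speech in a few lines (this assumes Google Cloud credentials are configured; the voice and encoding choices are illustrative):

    from google.cloud import texttospeech

    client = texttospeech.TextToSpeechClient()

    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text="Welcome to our channel!"),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )

    with open("voiceover.mp3", "wb") as out:
        out.write(response.audio_content)  # ready-to-use MP3 voiceover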
107. AI in Graphic Design: Supporting Creativity and Saving Time
• AI can support designers and non-designers in
creating beautiful designs.
• AI can expedite the design process, saving time
for other projects or tasks.
• AI can automate repetitive tasks, suggest design
options, and provide data-driven insights.
• AI can be used for image editing, classification,
color manipulation, and font design.
109. AI video generation
• AI video generation technology allows you to produce
videos by typing in text and choosing characters and
voice.
• AI video generation platforms can create professional
videos quickly without equipment or editing skills.
• AI can automate repetitive tasks, suggest design
options, and provide data-driven insights.
• There are different types of AI video generators,
including video editors with AI tools, generative
text-to-video apps, and video productivity apps.
• Tools:
• synthesia.io
• movio.la
113. Step By Step Deepfake Guide
• There has to be a source and a destination. You can think of it as the source from
which we cut out the face and paste it onto the destination.
• Collect a database of both the person you are trying to put a face on, and of the
person from which you will borrow the face.
• Extracting frames: you can use ffmpeg or DeepFaceLab
(https://github.com/iperov/DeepFaceLab)
• Isolating faces
• Sorting faces
• Training the model
• Merging the faces
• Compositing
RNN: Recurrent Neural Networks
RNNs are a type of neural network that can process sequences of data. They are typically used for natural language processing (NLP) tasks, such as machine translation and text summarization. RNNs work by feeding the output of each step back in as input to the next step, which allows them to carry information across the input sequence and learn dependencies between its elements.
LSTM: Long Short-Term Memory
LSTMs are a type of RNN that are specifically designed to learn long-range dependencies. They do this by using gates that control the flow of information through the network. LSTMs have been shown to be very effective for NLP tasks, such as machine translation and text summarization.
CNN: Convolutional Neural Networks
CNNs are a type of neural network that are specifically designed for image processing tasks. They work by applying a series of convolution operations to the input image, which allows them to learn local patterns in the image. CNNs have been shown to be very effective for image classification, object detection, and image segmentation tasks.
DBN: Deep Belief Network
DBNs are a type of neural network that are composed of multiple layers of restricted Boltzmann machines (RBMs). RBMs are a type of neural network that can learn binary features from data. DBNs have been shown to be very effective for a variety of tasks, such as image classification, natural language processing, and speech recognition.
DSN: Deep Stacking Network
DSNs are a type of neural network that are composed of multiple layers of neural networks. Each layer of DSNs is trained on a different task, and the outputs of the layers are then combined to produce a final prediction. DSNs have been shown to be very effective for a variety of tasks, such as image classification, natural language processing, and speech recognition.
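As a concrete illustration (not from the original text), minimal Keras definitions of an LSTM-based sequence model and a small CNN; the layer sizes and input shapes are arbitrary:

    from tensorflow import keras
    from tensorflow.keras import layers

    # LSTM over sequences: 100 timesteps, 16 features per step
    seq_model = keras.Sequential([
        layers.LSTM(32, input_shape=(100, 16)),
        layers.Dense(1, activation='sigmoid'),
    ])

    # CNN over 64x64 RGB images
    img_model = keras.Sequential([
        layers.Conv2D(16, 3, activation='relu', input_shape=(64, 64, 3)),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation='softmax'),
    ])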
What is the use of TensorFlow?
TensorFlow is an open-source artificial intelligence library that uses dataflow graphs to build models. It allows developers to create large-scale neural networks with many layers. TensorFlow is mainly used for classification, perception, understanding, discovery, prediction, and creation.
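A tiny example of that dataflow style under TensorFlow 2.x, where operations become nodes in a graph that gradients can flow through:

    import tensorflow as tf

    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    w = tf.Variable(tf.ones((2, 1)))

    with tf.GradientTape() as tape:
        y = tf.matmul(x, w)           # a node in the computation graph
        loss = tf.reduce_mean(y ** 2)

    print(tape.gradient(loss, w).numpy())  # automatic differentiation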
With the evolving capabilities of IBM Watson and the proliferation of machine learning enabled platforms such as Azure Machine Learning, TensorFlow, and Amazon Machine Learning, etc., access to the power of Machine Learning will become available to more marketers and the integral role that ML plays in the effectiveness and efficiency of digital marketing will continue to increase. Every interaction is a potential machine learning data point and successful marketers and agencies will build the capabilities and hire the resources and partners to help them maximize this opportunity.
NLTK (Natural Language Toolkit): NLTK is a popular open-source library for natural language processing in Python. It provides several functions for tokenizing text, including word_tokenize and sent_tokenize.
TextBlob: TextBlob is another Python library for natural language processing. It provides a simple interface for tokenizing text using the WordTokenizer and SentenceTokenizer classes.
spaCy: spaCy is a powerful Python library for natural language processing. It provides a fast and accurate tokenizer that can be customized for different languages and tasks.
Stanford CoreNLP: Stanford CoreNLP is a Java-based natural language processing toolkit developed by Stanford University. It provides a range of tools for tokenization, including the PTBTokenizer and WhitespaceTokenizer.
Gensim: Gensim is a Python library for topic modeling and document similarity analysis. It provides a simple tokenizer that can be used to preprocess text data.
https://tawasulforum.org/article/digital-marketing/content-marketing/%D9%83%D8%AA%D8%A7%D8%A8%D8%A9-%D8%A7%D9%84%D9%85%D8%AD%D8%AA%D9%88%D9%89-gpt-3/
Content Writing with GPT-3 AI: A Comprehensive Guide
AI and GPT-3 for content generation
It helps you gain a competitive edge in the crowded digital marketing landscape. You can automate a wide range of time-consuming tasks and focus on the creative side of content marketing.
More importantly, AI and machine learning help you learn more about your customers and build meaningful relationships with them. By creating highly targeted, relevant, personalized AI content, you boost user engagement, conversions, and retention.
Let's take a closer look at the concept, how to use it in content creation, and the leading sites that offer it.
What is GPT-3 AI (GPT-3 Artificial Intelligence)?
A recently developed technology that has revolutionized the world of artificial intelligence (AI). Simply put, GPT-3 is better than anything before it at creating content that has a linguistic structure, whether human or machine language. GPT-3 stands for Generative Pre-trained Transformer 3, the third release of the tool.
In short, it generates text using pre-trained algorithms: they have already been fed all the data they need to carry out their task. Specifically, about 570 GB of text gathered by crawling the internet (a publicly available dataset known as Common Crawl), along with other texts selected by OpenAI, including the text of Wikipedia.
GPT-3 is the largest language model, with 175 billion parameters, more than 10 times as many as Microsoft's Turing NLG.
Who founded GPT-3 AI?
GPT-3 was created by OpenAI, a research company co-founded by Elon Musk, and it has been described as the most important and useful advance in AI in years.
The code itself is not yet publicly available; access is limited to selected developers through an API maintained by OpenAI. Since the API was made available in June 2020, examples of poetry, prose, news reports, and creative fiction have appeared.
AI and its importance in content writing
AI in content writing has a number of benefits, including saving time, supplying a large pool of content ideas, and improving creativity.
Benefits:
• Saving time.
• Supplying a large pool of content ideas.
• Improving creativity.
• Increasing productivity.
• Improving the quality of information.
• Saving money, by saving time and the cost of hiring content writers.
• Content can be produced on any topic at any time.
• Deadlines can be met more easily and with less effort.
• Content ideas can be generated at scale, so content writers can focus on what they do best: creativity and emotion.
Content types for GPT-3 AI
GPT-3 can create anything that has a linguistic structure, which means:
• GPT-3 can answer questions, write essays, summarize long texts, translate languages, and take notes.
• GPT-3 can be used to create poetry, stories, news reports, and dialogue from only a small amount of input text, which can be used to produce large quantities of high-quality copy.
• GPT-3 can also be used for automated conversation tasks, responding to any text a person types into the computer with new text appropriate to the context.
• It is not limited to summaries of human-written text; it can also generate text summaries automatically and even programming code.
• It can also be used in question-answering systems. This is, of course, quite revolutionary, and if it proves usable and useful in the long term, it could have major implications for how software and applications are developed in the future.
Languages supported by GPT-3 AI
While the pre-training data for GPT-2, the second version of GPT, was filtered for English only, the GPT-3 research paper notes that its pre-training data was not filtered, so the text essentially includes any language as it appears on the internet. It is therefore mostly English (93% by word count), but it also includes 7% text in other languages.
The analyses were extended to two additional commonly studied languages, German and Romanian.
It currently supports more than 33 languages, including: Arabic, Bulgarian, Chinese (Simplified), Chinese (Traditional), Czech, Danish, Dutch, English, Persian, Filipino, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Malay, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Spanish, Swedish, Thai, Turkish, and Vietnamese.
What capabilities does GPT-3 AI offer?
As mentioned, many NLP tasks can be performed by GPT-3 without any gradient or parameter updates and without fine-tuning:
• Language Translation
• Text Classification
• Sentiment Extraction
• Reading Comprehension
• Named Entity Recognition
• Question Answer Systems
• News Article Generation
This makes it a task-agnostic model: it can perform tasks with no prompting at all, or with only a handful of examples or demonstrations, called shots.
Here is a summary of some observations made while experimenting with the API interface called Playground:
• Settings and Presets: Clicking the settings icon lets you configure parameters such as the length of the text and its intensity (from low/boring, to standard, to chaotic/creative), the start and stop of the generated text, and so on. There are several presets to choose from and play with, such as Chat, Q&A, Parsing Unstructured Data, and Summarize for a 2nd Grader.
• Chat: The chat preset behaves like a conversational agent. If you set the AI's personality to friendly, creative, clever, and helpful, it provides useful answers in a very polite manner; if you set the personality to brutal, it responds exactly as that personality suggests!
• Q&A: Question answering needs some training before it starts answering questions, and people had no complaints about the kind of answers they received.
• Parsing Unstructured Data: An interesting preset in which the model can understand and extract structured information from unstructured text.
• Summarize for 2nd Grader: This preset shows another level of text compression, rephrasing difficult sentences and concepts into simpler words and sentences that even a child can easily understand.
• Multilingual text processing: GPT-3 handles languages other than English better than GPT-2 did. Tasks were tried in different languages (German, Russian, and Japanese); it performed well and is fully ready to process multilingual text.
• Text Generation: It can generate poems, in a particular style if required, and write stories and articles, with some refinement, even in other languages.
• Code Generation: Users have claimed that the GPT-3 API can generate code from a minimal number of prompts.
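These Playground settings map directly onto API parameters. A hypothetical call with the legacy openai client, showing the length, intensity (temperature), and stop controls described above:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    response = openai.Completion.create(
        engine="davinci",
        prompt="Write a tagline for a coffee shop:",
        temperature=0.9,  # higher = more 'chaotic/creative' output
        max_tokens=30,    # controls the length of the generated text
        stop=["\n"],      # a stop sequence ends the generation
    )
    print(response.choices[0].text.strip())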
How to use GPT-3 AI for content writing
It is no secret that content creation is a time-consuming process. Even with the help of AI tools, it takes a lot of time and effort to create a professional piece of content. That is why many companies are turning to AI: it can cut the time needed to create more professional content.
This is done by either:
• Getting an AI content-writing tool and programming it to your requirements; or
• Using sites that generate content with GPT-3: they ask the user to enter text of up to four words, then the system analyzes the language and uses a text predictor to create the most likely output.
4 sites that offer AI content writing
Here are the leading sites that help you create professional, customized content suitable for most sectors and specialties:
rytr
• Helps you create high-quality content, from blogs to emails to ad copy, in just a few seconds and at a fraction of the cost.
• Powered by state-of-the-art language AI to create unique, original content for almost any domain.
• All you have to do is choose a use case and enter some context.
• You can get an unlimited plan for only $25.
• It supports a large number of content types and languages, and its price is cheap given that it is unlimited.
• Site link here
katteb
A powerful AI content-generation tool that can help you gather your ideas, get creative, and produce first-class customized content for your audience. The basic plan costs only $4.99 per month and offers:
• 30,000 words
• Blog topics
• Blog introductions
• Blog outlines
• Rewriting with multiple variations
• Rewriting of entire articles
• The PAS framework
• Product descriptions
• Promotional ideas
• Facebook and Google ads
• Proofreading
• Support for more than 60 languages
• An unlimited number of users. Visit the site to see the more comprehensive plans.
shortlyai
Designed so you can use it for almost any writing you need. It gives you a large blank writing space on the right and a sidebar of tools on the left. It offers an annual plan at $65 per month, which you can cancel whenever you wish, with two months free. Visit the site here to learn about the other features offered.
betterwriter.ai
You can try the free trial, which offers 7 days of free access, generation and processing of up to 10,000 words, and all features: detailed text, full text, text summarization, blog outlines, and blog topic ideas. Its main services, deliverable in record time:
• Writing blogs.
• Creating blog outlines, introduction paragraphs, and conclusion paragraphs.
• Creating Amazon product descriptions for 50 different items.
• Creating 50 different types of Google ads.
• Writing articles.
• Site link here
Conclusion
It is great to have an NLP system that does not require large amounts of custom task-specific datasets and custom model architectures to solve specific NLP tasks.
The experiments conducted show its power, its potential, and its impact on the future of NLP progress, along with its positive results in saving time and increasing productivity.
GPT-3 is a great example of how far AI model development has come; as mentioned, it is the largest language model yet built.
References
• GPT-3 paper: https://arxiv.org/pdf/2005.14165.pdf
• Article on GPT-3 in action: https://towardsdatascience.com/gpt-3-creative-potential-of-nlp-d5ccae16c1ab
• GPT-3 GitHub page: https://github.com/openai/gpt-3
• https://betterwriter.ai/
• https://shortlyai.com/
• https://katteb.com/
• https://rytr.me/