DATA SATURDAY
Sofia, Oct 07th
How do OpenAI GPT Models Work
Debunk Misconceptions and Tips for Developers
Ivelin Andreev
• Solution Architect @
• Microsoft Azure & AI MVP
• External Expert Eurostars-Eureka, Horizon Europe
• External Expert InnoFund Denmark, RIF Cyprus
• Business Interests
o Web Development, SOA, Integration
o IoT, Machine Learning
o Security & Performance Optimization
• Contact
ivelin.andreev@kongsbergdigital.com
www.linkedin.com/in/ivelin
www.slideshare.net/ivoandreev
SPEAKER BIO
Thanks to our Sponsors
Finally! We prepared your "LUNCH" break menu
We do not focus on this
We focus on this
Topics we will Safely Ignore
• Convolutional Neural Networks (CNN)
• Compare GPT-4 to 3.5 and Bard
• Art of prompt engineering
• Roles, perspective training
• Having fun with ChatGPT
• Plugins, generating content
Takeaways
• What is ChatGPT doing and why does it work
o https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
• Data Usage Policy
o https://help.openai.com/en/articles/7039943-data-usage-for-consumer-services-faq
• Sparks of General Intelligence in GPT-4 (Microsoft Research, Apr 2023)
o https://youtu.be/qbIk7-JPB2c
• Prompts for Developers
o https://github.com/f/awesome-chatgpt-prompts
o (Course) ChatGPT Prompt Engineering for Developers (by OpenAI)
• OpenAI Cookbook (sample code for common tasks with the OpenAI API)
o https://github.com/openai/openai-cookbook/
You MUST read that one
And watch that one
Key Terms
TERM – DEFINITION
Prompt – The text you send to the service in the API call. This text is then input into the model.
Completion – The text Azure OpenAI outputs in response.
Token – A unit of text encoding (with an ID), optimized to encode Internet content with a minimum number of tokens. Tokens can be words or just chunks of characters (see the counting sketch below).
• 1 token ~= 4 chars in English
• 100 tokens ~= 75 words
Few-shot / One-shot / Zero-shot prompting – Examples show the model how to operate. Include several examples in the prompt that demonstrate the expected answer format and content (a form of additional training).
Embedding – Information-dense representation of the semantic meaning of a piece of text. The format is a vector of floating-point numbers.
Recency Bias – The order in which input is presented matters.
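To make the token rules of thumb above concrete, here is a minimal sketch using the open-source tiktoken library (assumed installed; the model name is only used to pick an encoding and is illustrative).

```python
# Minimal sketch, assuming the tiktoken package is installed.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "Tokens can be words or just chunks of characters."
token_ids = encoding.encode(text)

print(f"{len(text)} characters -> {len(token_ids)} tokens")
# Show how the text was actually split into tokens
print([encoding.decode([t]) for t in token_ids])
```

On English prose this typically confirms the ~4 characters per token estimate; other languages and source code usually need more tokens per word.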
Good to Know!
• All prompts result in completions
o Model completes the text
o Model replies with what it thinks is most likely to follow
• Large-scale language and image models can potentially behave in ways
that are unfair, unreliable, or offensive, potentially causing harm
• OpenAI models are trained primarily on English text.
o Bias may have been introduced. Other languages are likely to show worse performance.
• Stereotyping
o DALL-E generation for prompts such as “homeless” or “fatherless” may render images of predominantly African-Americans
• Reliability
o Models may fabricate seemingly reasonable content that does not correspond to facts.
o Even when trained on trusted sources, models may fabricate content that misrepresents those sources
Transparency Note: https://learn.microsoft.com/en-us/legal/cognitive-services/openai/transparency-note
Know your GPT
• ChatGPT for Enterprises (URL)
o Launched 2023.08.23
• Pricing
o Negotiate case-by-case
• Highlights
• Copyright Lawsuits
o Under attack by copyright holders
o Does Copilot training on GitHub violate open-source licenses?
• Microsoft Promise (URL)
o Microsoft would cover the cost for any copyright
violations that may arise from use (applies to
M365 Copilot, Copilot and Bing Chat
Enterprise)
GPT – Advanced Challenges & Misconceptions
Top Questions (and Answers)
Q: Why does the same prompt receive different completions for different users
and when to expect the best answer? Who decides which is the best answer?
A: Models inject a varying amount of pseudo-randomness when selecting response tokens; how much
is controlled mainly by two parameters (see the sketch below).
o Temperature [0,1] - Controls the “creativity” or randomness of the generated text. A higher value makes the output more divergent (fictional)
o Top_P [0,1] – choose only from tokens within the cumulative probability Top_P (higher = more diverse response, lower = more focused response)
o Prompt: [Prompt Text] Set the temperature to 0.1
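As an illustration, the sketch below sends the same prompt twice with different temperature values (pre-1.0 openai Python SDK style; the prompt and model name are placeholders and an API key is assumed to be set in the environment).

```python
# Minimal sketch, assuming the openai package (<1.0 API style) and an API key in the env.
import openai

messages = [{"role": "user", "content": "Suggest a name for a data conference mascot."}]

for temperature in (0.1, 0.9):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=temperature,  # low = focused/repeatable, high = more divergent
        # top_p=0.5,              # the alternative knob; usually change one, not both
    )
    print(temperature, "->", response["choices"][0]["message"]["content"])
```

At temperature 0.1 repeated runs tend to return near-identical answers; at 0.9 the answers vary noticeably from run to run.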
Q: Is there a way to improve explainability?
A: For GPT-4 by itself, chain-of-thought prompting is a technique that can help increase the
likelihood of accurate answers, but it is not going to give you accurate source citations.
o group the 20 most common fruits in groups, cite the reason in the format BEHAVIOR("reason")
o …Output format: {"TOPIC_NAME": "", "HEADLINES": [], "REASONING": ""}
o Answer in 100 words or less. Use bullet lists wherever possible.
Q: Why does Bing chat work differently? It seems to be showing the sources.
A: Chunks of text from Bing search are added behind the scenes to the GPT-4 Chat Completion call as
part of the messages array. GPT-4 is not guaranteed to limit itself to these sources.
Manage the Conversation
• Context
o No memory; all information must be present in the conversation
o Once the token limit is reached, the oldest messages are removed (quality degrades)
o Context = prompts + input + output
• Think in Tokens
o Common multisyllable words are a single token (e.g. dataset)
o Less common words or dates are broken into several tokens (e.g. tuning, 2023/10/18)
o Tokenizer apps: https://tokenization.azurewebsites.net/ or https://platform.openai.com/tokenizer
• Newer models have higher limits, but there is still a context limit
o GPT-3.5 (4’096 tokens), GPT-4 (8’192 tokens, 10 pages), GPT-4-32 (32’768 tokens, 40 pages)
o Option: limit the conversation to the max token length or a certain number of turns (see the sketch below)
o Option: summarize the conversation so far and feed the summary as a prompt (details are lost)
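A minimal sketch of the first option: keep only the newest messages that fit a fixed token budget. It assumes tiktoken is installed; the rough per-message count ignores the few formatting tokens the chat format adds, and the 4,096 budget matches the GPT-3.5 limit above.

```python
# Minimal sketch, assuming the tiktoken package is installed.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

def count_tokens(message: dict) -> int:
    # Rough estimate: content only; the real chat format adds a few tokens per message.
    return len(encoding.encode(message["content"]))

def trim_history(messages: list[dict], budget: int = 4096) -> list[dict]:
    kept, used = [], 0
    # Walk from the newest message backwards and drop the oldest once the budget is hit.
    for message in reversed(messages):
        used += count_tokens(message)
        if used > budget:
            break
        kept.append(message)
    return list(reversed(kept))
```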
Own data in GPT with AZ Search
• Own data for responses in ChatGPT (preview 2023.06.19)
o https://github.com/pablomarin/GPT-Azure-Search-Engine/blob/main/03-Quering-AOpenAI.ipynb
o https://github.com/Azure-Samples/azure-search-openai-demo/
o Inject own data using prompts (no fine-tuning or retraining)
• The Challenges
o Context limit is 4K (v3.5) or 32K (v4) tokens per prompt
o Passing GBs of data as prompt is not possible
• Approach
o Keep all the data in an external knowledge base (KB)
o Retrieve fast from the KB (AZ Cognitive Search)
• Flow
o Determine what data to retrieve from the data source (Cognitive Search) based on the user input
o The retrieved data is augmented and appended to the prompt sent to the OpenAI model
o The resulting input is processed like any other prompt by the model (see the sketch below)
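A minimal sketch of this flow, assuming an existing Azure Cognitive Search index with a text field named "content"; the endpoint, key, index name and model name are placeholders (with Azure OpenAI the deployment name would be passed instead of a public model name).

```python
# Minimal sketch, assuming azure-search-documents and the openai package (<1.0 style);
# the "content" field name and all credentials are illustrative placeholders.
import openai
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search>.search.windows.net",
    index_name="<your-index>",
    credential=AzureKeyCredential("<search-query-key>"),
)

def answer_with_own_data(question: str) -> str:
    # 1. Retrieve the most relevant chunks from the knowledge base
    hits = search_client.search(search_text=question, top=3)
    context = "\n\n".join(doc["content"] for doc in hits)
    # 2. Augment the prompt with the retrieved data
    messages = [
        {"role": "system", "content": "Answer using ONLY the sources below.\n\n" + context},
        {"role": "user", "content": question},
    ]
    # 3. The model processes the augmented prompt like any other prompt
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return response["choices"][0]["message"]["content"]
```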
DEMO
OpenAI on YOUR data
With AZ Cognitive Search
Function Calling
• Available in Azure OpenAI (2023.07.20)
o gpt-35-turbo and gpt-4 can produce structured JSON outputs based on functions
o new API parameters in the /v1/chat/completions endpoint (functions, function_call)
Note: the API does not call the function; instead, the model generates JSON that you can use
• Purpose
o Solves the models' inability to access online data
o Empowers developers to easily integrate external tools and APIs
o The model recognizes situations where a function call is necessary and creates structured JSON output (see the sketch below)
• Retrieve from data sources (search indexes, databases, APIs)
• Actions (write data to a database, call integration APIs, send notifications)
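A minimal sketch of the round trip in the pre-1.0 openai SDK style: the get_weather function and its JSON schema are hypothetical stand-ins; the model only returns the function name and JSON arguments, and the code calls the function itself and sends the result back.

```python
# Minimal sketch, assuming the openai package (<1.0 style); get_weather is hypothetical.
import json
import openai

def get_weather(city: str) -> dict:
    return {"city": city, "forecast": "sunny", "temp_c": 21}  # stand-in for a real API

functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

messages = [{"role": "user", "content": "What is the weather in Sofia?"}]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages,
    functions=functions, function_call="auto",
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # The API does NOT call the function; parse the generated JSON and call it ourselves
    args = json.loads(message["function_call"]["arguments"])
    result = get_weather(**args)
    messages += [message, {"role": "function", "name": "get_weather",
                           "content": json.dumps(result)}]
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
```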
Function Calling Tips
• Default Auto
o When functions are provided, by default the function_call will be set to "auto"
• Validate function calls
o Verify the function calls generated by the model: parameters, intended action (see the sketch after this list)
• Data from Trusted/Verified Sources
o Untrusted data in a function output could instruct the model to write function calls in an unintended way
• Least Privilege Principle
o The minimum access necessary for the function to perform its job
o E.g. to query a database, use read-only (R/O) access to the database
• Consider Real-World Impact
o Real-world impact of function - executing code, updating databases, or sending notifications
• User Confirmation
o A step where the user confirms the action before it's executed
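A minimal sketch of validating a model-generated call before executing anything; the allow-list and the argument checks are illustrative assumptions, not a complete policy.

```python
# Minimal sketch: check the function name against an allow-list and sanity-check the
# generated arguments before (optionally) asking the user to confirm execution.
import json

ALLOWED_FUNCTIONS = {"get_weather"}   # least privilege: explicit allow-list
MAX_CITY_LENGTH = 64

def validate_call(function_call: dict) -> dict:
    name = function_call["name"]
    if name not in ALLOWED_FUNCTIONS:
        raise ValueError(f"Function not allowed: {name}")
    args = json.loads(function_call["arguments"])     # raises on malformed JSON
    city = args.get("city", "")
    if not isinstance(city, str) or not 0 < len(city) <= MAX_CITY_LENGTH:
        raise ValueError("Invalid 'city' argument")
    return args  # safe to execute, ideally after user confirmation
```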
Any sufficiently advanced technology is
indistinguishable from magic
~ Sir Arthur C. Clarke ~
How do GPT models work?
Surprising Findings
• Raw Training
o Training on a large amount of unstructured text (can be automated): ~300 bln words, 570 GB of text data
o 1 new token requires retraining of all weights
• Refinement
o Prompt and inspect for deviation in sense (reinforcement learning from human feedback)
• GPT is a NN able to capture the complexity of human-generated language
o 175 bln+ weighted neuron connections (GPT-3), 1 tln+ parameters (GPT-4)
o GPT has no explicit knowledge of grammar rules
o Yet somehow “implicitly discovered” regularities in language (the hidden laws behind language).
• Grammatical rules
• Meaning and logic (semantics)
Finding: there’s actually a lot more structure and simplicity to meaningful human language than we ever knew
Embeddings as inner representation
• An LLM does reasonable continuation
o The generative model completes a sentence by predicting the next words
o Repeatedly apply the model (predict the next token, and the next, and the next)
o Good prediction requires a context of n-grams (words back)
• Highest ranked token - is that what it uses?
o No, because then all responses would be too similar.
o In practice, we get different results for the same prompt
• The Temperature Parameter
o Controls how often lower-ranked words are used in generation
o Temperature=0.8 empirically determined (no science here)
• Embeddings
o Numerical representation of a piece of information (text, doc, img)
o Group tokens in meaning spaces (by distance)
Embeddings Observations
• Statement
o Encoded to embedding (Vector representation)
• Semantic similarity
o Defined as vector distance
o Measured as cosine of angle b/w vectors
• Embeddings measure topical more than logical similarity
o “The sky is blue” is very close to “The sky is not blue”
• Punctuation affects embedding
o “THE. SKY. IS. BLUE!” is not that close to “The sky is blue”
• In-language similarity is stronger than across-language similarity
o “El cielo es azul” is closer to “El cielo es rojo” than to “The sky is blue” (see the sketch below)
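A minimal sketch reproducing the observations above (pre-1.0 openai SDK style): embed two statements with text-embedding-ada-002 and compare them by cosine similarity. Exact scores will differ, but the "topical, not logical" effect shows up clearly.

```python
# Minimal sketch, assuming the openai package (<1.0 style) and an API key in the env.
import math
import openai

def embed(text: str) -> list[float]:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    return resp["data"][0]["embedding"]          # 1,536-dimensional vector

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

blue, not_blue = embed("The sky is blue"), embed("The sky is not blue")
print(cosine(blue, not_blue))   # very high: topical rather than logical similarity
```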
Using the Embeddings
• Embedding Vectors (EVs)
o Smaller embeddings = efficient models
o GPT-3 text-embedding-ada-002 has 1’536 dimensions and 99.8% of the performance of davinci-001 (12’288 dimensions)
• Attention is important
o A fully connected NN at this volume would be overkill
o Continue the sentence in a reasonable way
• Prediction Stages
o Stage 1: Input of n tokens converted to EVs (e.g. 12’288 floats each)
o Stage 2: The sequence of token positions is encoded into an EV
o Stage 3: EVs from Stage 1 and Stage 2 are combined into a single EV
Context Encoding - Surprising Scientific Discovery
• Transformer NN Concept
o Proposed in 2017 by Google Brain team
o Feed-forward architecture (no recurrence, more efficient)
• Transformer Architecture Overview
o Input: encoder layer creates representation of the past as EV
o Attention blocks: generate contextual representation of the input
• Each attention block has its own pattern of attention
• Multiple attention heads process the EV in parallel
• Outputs are concatenated and transformed (a reweighted EV)
o Output: a classifier generates the probability distribution for the next token
• Attention blocks
o 96 blocks x 96 heads in GPT-3, 128 x 128 in GPT-4
o Image: 64x64 moving average of the weights an attention block applies to the EV
• The Magic: shows the neural-net encoding of human language (see the attention sketch below)
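A minimal NumPy sketch of what a single attention head computes (scaled dot-product attention), the operation repeated across the blocks and heads listed above; the toy sizes are illustrative, not GPT dimensions.

```python
# Minimal sketch of one attention head on toy-sized embedding vectors (EVs).
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # how much each token attends to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the context
    return weights @ V                                # reweighted (contextual) EVs

n_tokens, d_model = 5, 8                              # toy sizes
x = np.random.randn(n_tokens, d_model)                # Stage-3 EVs (content + position)
Wq, Wk, Wv = (np.random.randn(d_model, d_model) for _ in range(3))
out = attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)                                      # (5, 8): one contextual vector per token
```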
LLMs are powerful, yet they consistently struggle with some types of reasoning problems
General Intelligence
• Sparks of Artificial General Intelligence in GPT4 (full paper)
o Text-only model
o Capable of generating SVG images and videos from code
• Intelligence involves the following abilities (1997):
Reason
Plan
Solve Problems
Think abstractly
Understand complex concepts
Learn quickly from experience
• Conclusions
o Trained to predict next token/word
o But it does much more
o Some intelligence was present, not just memorization
o The newer GPT-4 version is different and dumber, for safety reasons
o Scored better than 100% of human candidates on mock job interviews
o Intelligent enough to use functions
o Uses tools
GPT Emerging Abilities
• LLM meant for Generation and Completion
o Not specifically designed for solving Math queries
o Sees numbers as tokens; no understanding of Math concepts
o Not reliable at solving complex problems (probabilistic approach)
Reasoning appears naturally in sufficiently large (10B+) LLMs
Performance is near-random until a certain critical threshold is
reached, after which it increases substantially
• Emergent Abilities of Large Language Models (p.28)
• Chain of thought prompting elicits reasoning in LLM (p.8)
Emergence - quantitative changes in a system result in qualitative changes in behaviour
Chain of Thought Prompting (CoTP)
• What is CoTP
o Series of intermediate reasoning steps that guide the model
o Improves model abilities for complex reasoning
• LLMs are capable few-shot learners
• Translate, Classify, Summarize
• Few-shot prompting can elicit CoT-style output
• How to do CoTP
o Ask similar question
o Show step-by-step
o Ask the real question (see the sketch below)
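A minimal sketch of the three steps as chat messages (pre-1.0 openai SDK style); the arithmetic examples are the classic illustration from the chain-of-thought literature, not from this deck.

```python
# Minimal sketch, assuming the openai package (<1.0 style) and an API key in the env.
import openai

messages = [
    # 1. Ask a similar question  2. Show the reasoning step by step
    {"role": "user", "content": "Roger has 5 tennis balls and buys 2 cans of 3 balls each. "
                                "How many balls does he have now?"},
    {"role": "assistant", "content": "Roger starts with 5 balls. 2 cans x 3 balls = 6 balls. "
                                     "5 + 6 = 11. The answer is 11."},
    # 3. Ask the real question; the model tends to imitate the step-by-step format
    {"role": "user", "content": "The cafeteria had 23 apples, used 20 and bought 6 more. "
                                "How many apples does it have now?"},
]
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages, temperature=0)
print(response["choices"][0]["message"]["content"])
```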
CoTP - Important Observations
• CoTP vs Standard Prompting improvement is robust
o Can potentially be applied to any task for which humans use the same technique
• Improvement achieved with very few samples (minimal data)
• Requires prompt engineering
o A background in ML is not required to prepare CoTP
o The order of examples matters (e.g. from 53% to 93%)
o Not all CoTP authors achieve the same accuracy improvement
• Limitations
o 8 = the magical number
o Same CoTP affect models differently
• LaMDA, GPT-3, PaLM
o Gains do not transfer across models
ChatGPT & Wolfram Alpha
• As of 2023.03.23 ChatGPT
o Requires: ChatGPT-4 to use plugins
o Wolfram Alpha query behind the scenes
o Parse plugin response
• ChatGPT Plugins
o One of the most desired features
o Waiting list for new users
o Extend the potential use cases
o Provide access to information not included in training:
• Too recent
• Too personal
• Too specific
ChatGPT Can Now Hear, See and Speak
• New voice and image capabilities (Sept 25, 2023)
o Voice conversation
o Show GPT images to describe your thoughts
o Generate images (DALL-E 3 plugin for GPT, from Oct 2023)
• How is this important?
o The level of interaction brings new opportunities
• Use Cases
o Generate code from image and iteratively improve
o Analyze Math graphs
o Request repair instructions
• Limitations
o GPT-4V (Vision) requires ChatGPT Plus ($20/month) or Enterprise
o Optimized for English
o Limited capability with highly technical images
o Safety features of DALL-E 3 prevent it from generating explicit or violent content, as well as images of public figures
Tips for Developers
Features Working behind the Scenes
• Moderation API endpoint (preview May 2023)
(prompt: Tell me about content safety moderator in ChatGPT)
o Analyzes response for potential issues using rule-based systems and ML
o Designed to help detect and filter out content that may violate OpenAI's policies
o Enabled by default for the OpenAI API (see the sketch below)
• Custom Content Filters
(prompt: what are custom content filters in ChatGPT)
o Allow users to add own content moderation rules on top of the default
o Rules can be used to filter out specific topics and ensure compliance.
o Additional layer of moderation to comply with standards
• Multi-modal Capabilities
o Separate Vision encoder
o Text and Vision encoders interact via cross-attention mechanism
Toggle low severity level filters (only)
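A minimal sketch of calling the Moderation endpoint directly to screen user input before it reaches a chat model (pre-1.0 openai SDK style; the input text is a placeholder).

```python
# Minimal sketch, assuming the openai package (<1.0 style) and an API key in the env.
import openai

result = openai.Moderation.create(input="Some user-submitted text to screen")["results"][0]

if result["flagged"]:
    print("Blocked categories:",
          [name for name, hit in result["categories"].items() if hit])
else:
    print("Content passed moderation")
```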
Florence - Large Foundational Model for Vision
• Trained on a large-scale dataset of text–image pairs (~10^9)
o Weak (almost no) supervision
o 893M model parameters
o 100 + 4’000GB = 10 days
• Part of Cognitive Services for Vision
• Highlights
o Wide range of objects and scenes
o Near real-time
o Zero-shot (no extra training)
o Large impact for business apps
o Deployed in cloud
o Available in Vision Studio Portal
https://arxiv.org/pdf/2111.11432.pdf
• Typical Tasks
o Supports millions of categories
o Transformer-based (1’000 dim EV)
• Dense Captions
o Up to 10 sections
o Detect objects
o Describe actions
• API, S3 instance required
o Still in Preview (free)
Demo Code:
https://github.com/retkowsky/Azure-Computer-Vision-in-a-day-workshop
Prompt Use Cases for Developers
• Automated testing - Write a test script for [language] code that covers [functional / non-functional] testing: [snippet].
• Code refactoring - Optimize the following [language] code for lower memory usage: [snippet]
• Algorithm development - Design a heuristic algorithm to solve the following problem: [problem description].
• Code translation - Rewrite [source language] data structure implementation in [target language]
• Technical writing - Write a tutorial on how to integrate [library] with [programming language]
• Requirement analysis - Analyze the given project requirements and propose a detailed project plan with milestones and deliverables
• Code generation - Generate/Write/Create a function in [JS/C#/Java/Py] that does …
• Bug detection - Find the bug in this [code]
• Code review - Analyze the given [language] code for code smells and suggest improvements: [snippet].
• API documentation generation - Produce a tutorial for using the following [language] API with example code: [code snippet].
• Query optimization - Suggest improvements to the following database schema for better query performance
• User interface design - Generate a UI mockup for a [web/mobile] dashboard that visualizes [data or metrics]
Search is Better than Fine-Tuning
• Unfamiliar topics?
o Recent events after September 2021
o Own and non-public documents; Past conversations
• GPT learns by:
o Model weights (fine tune model on training set)
o Model inputs (new knowledge, text)
• Fine tuning (How to for GPT-3.5 Turbo)
o Prohibitively expensive; requires assembling a training dataset, which is difficult
• Parameters (Chat)
o temperature: controls the randomness of responses. >0.8 = creative; <0.5 = focused and deterministic.
o max_tokens: limits the length of the response to a specified number of tokens.
• Parameters (OpenAI API)
o top_p - balances creativity via nucleus sampling: the model samples only from the smallest set of tokens whose cumulative probability reaches top_p (e.g. 0.2 = more deterministic)
o frequency_penalty - discourages the model from repeating the same words (see the sketch below)
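A minimal sketch combining these parameters in a single call (pre-1.0 openai SDK style); the values are illustrative, not recommendations.

```python
# Minimal sketch, assuming the openai package (<1.0 style) and an API key in the env.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize what embeddings are in two sentences."}],
    temperature=0.3,        # < 0.5: focused and deterministic
    max_tokens=120,         # hard cap on the length of the completion
    top_p=0.9,              # nucleus sampling over the top 90% of probability mass
    frequency_penalty=0.5,  # discourage repeating the same words
)
print(response["choices"][0]["message"]["content"])
```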
Upcoming Events
JS Talks
Nov 17-18, 2023 @Sofia Tech Park
Tickets (Eventbrite)
Submit Session (Sessionize)
Thanks to our Sponsors

More Related Content

What's hot

Microsoft + OpenAI: Recent Updates (Machine Learning 15minutes! Broadcast #74)
Microsoft + OpenAI: Recent Updates (Machine Learning 15minutes! Broadcast #74)Microsoft + OpenAI: Recent Updates (Machine Learning 15minutes! Broadcast #74)
Microsoft + OpenAI: Recent Updates (Machine Learning 15minutes! Broadcast #74)Naoki (Neo) SATO
 
Generative AI and law.pptx
Generative AI and law.pptxGenerative AI and law.pptx
Generative AI and law.pptxChris Marsden
 
Cavalry Ventures | Deep Dive: Generative AI
Cavalry Ventures | Deep Dive: Generative AICavalry Ventures | Deep Dive: Generative AI
Cavalry Ventures | Deep Dive: Generative AICavalry Ventures
 
How Azure helps to build better business processes and customer experiences w...
How Azure helps to build better business processes and customer experiences w...How Azure helps to build better business processes and customer experiences w...
How Azure helps to build better business processes and customer experiences w...Maxim Salnikov
 
Large Language Models Bootcamp
Large Language Models BootcampLarge Language Models Bootcamp
Large Language Models BootcampData Science Dojo
 
Generative Models and ChatGPT
Generative Models and ChatGPTGenerative Models and ChatGPT
Generative Models and ChatGPTLoic Merckel
 
generative-ai-fundamentals and Large language models
generative-ai-fundamentals and Large language modelsgenerative-ai-fundamentals and Large language models
generative-ai-fundamentals and Large language modelsAdventureWorld5
 
Let's talk about GPT: A crash course in Generative AI for researchers
Let's talk about GPT: A crash course in Generative AI for researchersLet's talk about GPT: A crash course in Generative AI for researchers
Let's talk about GPT: A crash course in Generative AI for researchersSteven Van Vaerenbergh
 
ChatGPT vs. GPT-3.pdf
ChatGPT vs. GPT-3.pdfChatGPT vs. GPT-3.pdf
ChatGPT vs. GPT-3.pdfAddepto
 
The Rise of the LLMs - How I Learned to Stop Worrying & Love the GPT!
The Rise of the LLMs - How I Learned to Stop Worrying & Love the GPT!The Rise of the LLMs - How I Learned to Stop Worrying & Love the GPT!
The Rise of the LLMs - How I Learned to Stop Worrying & Love the GPT!taozen
 
Best Practice on using Azure OpenAI Service
Best Practice on using Azure OpenAI ServiceBest Practice on using Azure OpenAI Service
Best Practice on using Azure OpenAI ServiceKumton Suttiraksiri
 
Unlocking the Power of Generative AI An Executive's Guide.pdf
Unlocking the Power of Generative AI An Executive's Guide.pdfUnlocking the Power of Generative AI An Executive's Guide.pdf
Unlocking the Power of Generative AI An Executive's Guide.pdfPremNaraindas1
 
The Future of AI is Generative not Discriminative 5/26/2021
The Future of AI is Generative not Discriminative 5/26/2021The Future of AI is Generative not Discriminative 5/26/2021
The Future of AI is Generative not Discriminative 5/26/2021Steve Omohundro
 
AI and ML Series - Introduction to Generative AI and LLMs - Session 1
AI and ML Series - Introduction to Generative AI and LLMs - Session 1AI and ML Series - Introduction to Generative AI and LLMs - Session 1
AI and ML Series - Introduction to Generative AI and LLMs - Session 1DianaGray10
 
Large Language Models - Chat AI.pdf
Large Language Models - Chat AI.pdfLarge Language Models - Chat AI.pdf
Large Language Models - Chat AI.pdfDavid Rostcheck
 
How Does Generative AI Actually Work? (a quick semi-technical introduction to...
How Does Generative AI Actually Work? (a quick semi-technical introduction to...How Does Generative AI Actually Work? (a quick semi-technical introduction to...
How Does Generative AI Actually Work? (a quick semi-technical introduction to...ssuser4edc93
 

What's hot (20)

Microsoft + OpenAI: Recent Updates (Machine Learning 15minutes! Broadcast #74)
Microsoft + OpenAI: Recent Updates (Machine Learning 15minutes! Broadcast #74)Microsoft + OpenAI: Recent Updates (Machine Learning 15minutes! Broadcast #74)
Microsoft + OpenAI: Recent Updates (Machine Learning 15minutes! Broadcast #74)
 
Generative AI and law.pptx
Generative AI and law.pptxGenerative AI and law.pptx
Generative AI and law.pptx
 
Cavalry Ventures | Deep Dive: Generative AI
Cavalry Ventures | Deep Dive: Generative AICavalry Ventures | Deep Dive: Generative AI
Cavalry Ventures | Deep Dive: Generative AI
 
How Azure helps to build better business processes and customer experiences w...
How Azure helps to build better business processes and customer experiences w...How Azure helps to build better business processes and customer experiences w...
How Azure helps to build better business processes and customer experiences w...
 
Large Language Models Bootcamp
Large Language Models BootcampLarge Language Models Bootcamp
Large Language Models Bootcamp
 
Generative Models and ChatGPT
Generative Models and ChatGPTGenerative Models and ChatGPT
Generative Models and ChatGPT
 
generative-ai-fundamentals and Large language models
generative-ai-fundamentals and Large language modelsgenerative-ai-fundamentals and Large language models
generative-ai-fundamentals and Large language models
 
Let's talk about GPT: A crash course in Generative AI for researchers
Let's talk about GPT: A crash course in Generative AI for researchersLet's talk about GPT: A crash course in Generative AI for researchers
Let's talk about GPT: A crash course in Generative AI for researchers
 
ChatGPT vs. GPT-3.pdf
ChatGPT vs. GPT-3.pdfChatGPT vs. GPT-3.pdf
ChatGPT vs. GPT-3.pdf
 
The Rise of the LLMs - How I Learned to Stop Worrying & Love the GPT!
The Rise of the LLMs - How I Learned to Stop Worrying & Love the GPT!The Rise of the LLMs - How I Learned to Stop Worrying & Love the GPT!
The Rise of the LLMs - How I Learned to Stop Worrying & Love the GPT!
 
Best Practice on using Azure OpenAI Service
Best Practice on using Azure OpenAI ServiceBest Practice on using Azure OpenAI Service
Best Practice on using Azure OpenAI Service
 
ChatGPT ChatBot
ChatGPT ChatBotChatGPT ChatBot
ChatGPT ChatBot
 
Unlocking the Power of Generative AI An Executive's Guide.pdf
Unlocking the Power of Generative AI An Executive's Guide.pdfUnlocking the Power of Generative AI An Executive's Guide.pdf
Unlocking the Power of Generative AI An Executive's Guide.pdf
 
The Future of AI is Generative not Discriminative 5/26/2021
The Future of AI is Generative not Discriminative 5/26/2021The Future of AI is Generative not Discriminative 5/26/2021
The Future of AI is Generative not Discriminative 5/26/2021
 
AI and ML Series - Introduction to Generative AI and LLMs - Session 1
AI and ML Series - Introduction to Generative AI and LLMs - Session 1AI and ML Series - Introduction to Generative AI and LLMs - Session 1
AI and ML Series - Introduction to Generative AI and LLMs - Session 1
 
Large Language Models - Chat AI.pdf
Large Language Models - Chat AI.pdfLarge Language Models - Chat AI.pdf
Large Language Models - Chat AI.pdf
 
How Does Generative AI Actually Work? (a quick semi-technical introduction to...
How Does Generative AI Actually Work? (a quick semi-technical introduction to...How Does Generative AI Actually Work? (a quick semi-technical introduction to...
How Does Generative AI Actually Work? (a quick semi-technical introduction to...
 
The Creative Ai storm
The Creative Ai stormThe Creative Ai storm
The Creative Ai storm
 
Journey of Generative AI
Journey of Generative AIJourney of Generative AI
Journey of Generative AI
 
introduction Azure OpenAI by Usama wahab khan
introduction  Azure OpenAI by Usama wahab khanintroduction  Azure OpenAI by Usama wahab khan
introduction Azure OpenAI by Usama wahab khan
 

Similar to Here are a few key things to understand about how GPT models work:- They are trained on vast amounts of text using a technique called transformer architecture, which looks at the context of words and their relationships. This allows the model to understand language.- The model represents words, phrases, and concepts as dense numerical vectors called embeddings. Items with similar meanings have embeddings close together in a high-dimensional space. - When prompted with text, the model generates a response by predicting the next most likely word based on its understanding of language patterns and the context embeddings. It does this incrementally one word at a time.- Parameters like temperature control how random vs predictable the responses are. Lower temperature yields more typical and

OpenAI GPT in Depth - Questions and Misconceptions
OpenAI GPT in Depth - Questions and MisconceptionsOpenAI GPT in Depth - Questions and Misconceptions
OpenAI GPT in Depth - Questions and MisconceptionsIvo Andreev
 
DMDS Winter 2015 Workshop 1 slides
DMDS Winter 2015 Workshop 1 slidesDMDS Winter 2015 Workshop 1 slides
DMDS Winter 2015 Workshop 1 slidesPaige Morgan
 
Generative AI in CSharp with Semantic Kernel.pptx
Generative AI in CSharp with Semantic Kernel.pptxGenerative AI in CSharp with Semantic Kernel.pptx
Generative AI in CSharp with Semantic Kernel.pptxAlon Fliess
 
Feb.2016 Demystifying Digital Humanities - Workshop 2
Feb.2016 Demystifying Digital Humanities - Workshop 2Feb.2016 Demystifying Digital Humanities - Workshop 2
Feb.2016 Demystifying Digital Humanities - Workshop 2Paige Morgan
 
Machine Learning in NLP
Machine Learning in NLPMachine Learning in NLP
Machine Learning in NLPVijay Ganti
 
Robotics, Search and AI with Solr, MyRobotLab, and Deeplearning4j
Robotics, Search and AI with Solr, MyRobotLab, and Deeplearning4jRobotics, Search and AI with Solr, MyRobotLab, and Deeplearning4j
Robotics, Search and AI with Solr, MyRobotLab, and Deeplearning4jKevin Watters
 
The Intersection of Robotics, Search and AI with Solr, MyRobotLab, and Deep L...
The Intersection of Robotics, Search and AI with Solr, MyRobotLab, and Deep L...The Intersection of Robotics, Search and AI with Solr, MyRobotLab, and Deep L...
The Intersection of Robotics, Search and AI with Solr, MyRobotLab, and Deep L...Lucidworks
 
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...Daniel Zivkovic
 
Getting started-php unit
Getting started-php unitGetting started-php unit
Getting started-php unitmfrost503
 
ChatGPT-and-Generative-AI-Landscape Working of generative ai search
ChatGPT-and-Generative-AI-Landscape Working of generative ai searchChatGPT-and-Generative-AI-Landscape Working of generative ai search
ChatGPT-and-Generative-AI-Landscape Working of generative ai searchrohitcse52
 
Operationalizing Data Science St. Louis Big Data IDEA
Operationalizing Data Science St. Louis Big Data IDEAOperationalizing Data Science St. Louis Big Data IDEA
Operationalizing Data Science St. Louis Big Data IDEAAdam Doyle
 
Top 10 Interview Questions for Coding Job.docx
Top 10 Interview Questions for Coding Job.docxTop 10 Interview Questions for Coding Job.docx
Top 10 Interview Questions for Coding Job.docxSurendra Gusain
 
Top 10 Interview Questions for Coding Job.docx
Top 10 Interview Questions for Coding Job.docxTop 10 Interview Questions for Coding Job.docx
Top 10 Interview Questions for Coding Job.docxSurendra Gusain
 
Introduction to Multimodal LLMs with LLaVA
Introduction to Multimodal LLMs with LLaVAIntroduction to Multimodal LLMs with LLaVA
Introduction to Multimodal LLMs with LLaVARobert McDermott
 
Introduction to Multimodal LLMs with LLaVA
Introduction to Multimodal LLMs with LLaVAIntroduction to Multimodal LLMs with LLaVA
Introduction to Multimodal LLMs with LLaVARobert McDermott
 
Breaking Through The Challenges of Scalable Deep Learning for Video Analytics
Breaking Through The Challenges of Scalable Deep Learning for Video AnalyticsBreaking Through The Challenges of Scalable Deep Learning for Video Analytics
Breaking Through The Challenges of Scalable Deep Learning for Video AnalyticsJason Anderson
 
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin & Leanne La...
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin &  Leanne La...OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin &  Leanne La...
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin & Leanne La...NETWAYS
 
Your Voice is My Passport
Your Voice is My PassportYour Voice is My Passport
Your Voice is My PassportPriyanka Aash
 

Similar to Here are a few key things to understand about how GPT models work:- They are trained on vast amounts of text using a technique called transformer architecture, which looks at the context of words and their relationships. This allows the model to understand language.- The model represents words, phrases, and concepts as dense numerical vectors called embeddings. Items with similar meanings have embeddings close together in a high-dimensional space. - When prompted with text, the model generates a response by predicting the next most likely word based on its understanding of language patterns and the context embeddings. It does this incrementally one word at a time.- Parameters like temperature control how random vs predictable the responses are. Lower temperature yields more typical and (20)

OpenAI GPT in Depth - Questions and Misconceptions
OpenAI GPT in Depth - Questions and MisconceptionsOpenAI GPT in Depth - Questions and Misconceptions
OpenAI GPT in Depth - Questions and Misconceptions
 
DMDS Winter 2015 Workshop 1 slides
DMDS Winter 2015 Workshop 1 slidesDMDS Winter 2015 Workshop 1 slides
DMDS Winter 2015 Workshop 1 slides
 
Generative AI in CSharp with Semantic Kernel.pptx
Generative AI in CSharp with Semantic Kernel.pptxGenerative AI in CSharp with Semantic Kernel.pptx
Generative AI in CSharp with Semantic Kernel.pptx
 
Feb.2016 Demystifying Digital Humanities - Workshop 2
Feb.2016 Demystifying Digital Humanities - Workshop 2Feb.2016 Demystifying Digital Humanities - Workshop 2
Feb.2016 Demystifying Digital Humanities - Workshop 2
 
Machine Learning in NLP
Machine Learning in NLPMachine Learning in NLP
Machine Learning in NLP
 
Robotics, Search and AI with Solr, MyRobotLab, and Deeplearning4j
Robotics, Search and AI with Solr, MyRobotLab, and Deeplearning4jRobotics, Search and AI with Solr, MyRobotLab, and Deeplearning4j
Robotics, Search and AI with Solr, MyRobotLab, and Deeplearning4j
 
The Intersection of Robotics, Search and AI with Solr, MyRobotLab, and Deep L...
The Intersection of Robotics, Search and AI with Solr, MyRobotLab, and Deep L...The Intersection of Robotics, Search and AI with Solr, MyRobotLab, and Deep L...
The Intersection of Robotics, Search and AI with Solr, MyRobotLab, and Deep L...
 
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...
 
Getting started-php unit
Getting started-php unitGetting started-php unit
Getting started-php unit
 
ChatGPT-and-Generative-AI-Landscape Working of generative ai search
ChatGPT-and-Generative-AI-Landscape Working of generative ai searchChatGPT-and-Generative-AI-Landscape Working of generative ai search
ChatGPT-and-Generative-AI-Landscape Working of generative ai search
 
Operationalizing Data Science St. Louis Big Data IDEA
Operationalizing Data Science St. Louis Big Data IDEAOperationalizing Data Science St. Louis Big Data IDEA
Operationalizing Data Science St. Louis Big Data IDEA
 
Top 10 Interview Questions for Coding Job.docx
Top 10 Interview Questions for Coding Job.docxTop 10 Interview Questions for Coding Job.docx
Top 10 Interview Questions for Coding Job.docx
 
Top 10 Interview Questions for Coding Job.docx
Top 10 Interview Questions for Coding Job.docxTop 10 Interview Questions for Coding Job.docx
Top 10 Interview Questions for Coding Job.docx
 
Deep learning for NLP
Deep learning for NLPDeep learning for NLP
Deep learning for NLP
 
Introduction to Multimodal LLMs with LLaVA
Introduction to Multimodal LLMs with LLaVAIntroduction to Multimodal LLMs with LLaVA
Introduction to Multimodal LLMs with LLaVA
 
Introduction to Multimodal LLMs with LLaVA
Introduction to Multimodal LLMs with LLaVAIntroduction to Multimodal LLMs with LLaVA
Introduction to Multimodal LLMs with LLaVA
 
Breaking Through The Challenges of Scalable Deep Learning for Video Analytics
Breaking Through The Challenges of Scalable Deep Learning for Video AnalyticsBreaking Through The Challenges of Scalable Deep Learning for Video Analytics
Breaking Through The Challenges of Scalable Deep Learning for Video Analytics
 
Deep Domain
Deep DomainDeep Domain
Deep Domain
 
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin & Leanne La...
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin &  Leanne La...OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin &  Leanne La...
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin & Leanne La...
 
Your Voice is My Passport
Your Voice is My PassportYour Voice is My Passport
Your Voice is My Passport
 

More from Ivo Andreev

Cybersecurity and Generative AI - for Good and Bad vol.2
Cybersecurity and Generative AI - for Good and Bad vol.2Cybersecurity and Generative AI - for Good and Bad vol.2
Cybersecurity and Generative AI - for Good and Bad vol.2Ivo Andreev
 
Architecting AI Solutions in Azure for Business
Architecting AI Solutions in Azure for BusinessArchitecting AI Solutions in Azure for Business
Architecting AI Solutions in Azure for BusinessIvo Andreev
 
Cybersecurity Challenges with Generative AI - for Good and Bad
Cybersecurity Challenges with Generative AI - for Good and BadCybersecurity Challenges with Generative AI - for Good and Bad
Cybersecurity Challenges with Generative AI - for Good and BadIvo Andreev
 
JS-Experts - Cybersecurity for Generative AI
JS-Experts - Cybersecurity for Generative AIJS-Experts - Cybersecurity for Generative AI
JS-Experts - Cybersecurity for Generative AIIvo Andreev
 
Cutting Edge Computer Vision for Everyone
Cutting Edge Computer Vision for EveryoneCutting Edge Computer Vision for Everyone
Cutting Edge Computer Vision for EveryoneIvo Andreev
 
Collecting and Analysing Spaceborn Data
Collecting and Analysing Spaceborn DataCollecting and Analysing Spaceborn Data
Collecting and Analysing Spaceborn DataIvo Andreev
 
Collecting and Analysing Satellite Data with Azure Orbital
Collecting and Analysing Satellite Data with Azure OrbitalCollecting and Analysing Satellite Data with Azure Orbital
Collecting and Analysing Satellite Data with Azure OrbitalIvo Andreev
 
Language Studio and Custom Models
Language Studio and Custom ModelsLanguage Studio and Custom Models
Language Studio and Custom ModelsIvo Andreev
 
CosmosDB for IoT Scenarios
CosmosDB for IoT ScenariosCosmosDB for IoT Scenarios
CosmosDB for IoT ScenariosIvo Andreev
 
Forecasting time series powerful and simple
Forecasting time series powerful and simpleForecasting time series powerful and simple
Forecasting time series powerful and simpleIvo Andreev
 
Constrained Optimization with Genetic Algorithms and Project Bonsai
Constrained Optimization with Genetic Algorithms and Project BonsaiConstrained Optimization with Genetic Algorithms and Project Bonsai
Constrained Optimization with Genetic Algorithms and Project BonsaiIvo Andreev
 
Azure security guidelines for developers
Azure security guidelines for developers Azure security guidelines for developers
Azure security guidelines for developers Ivo Andreev
 
Autonomous Machines with Project Bonsai
Autonomous Machines with Project BonsaiAutonomous Machines with Project Bonsai
Autonomous Machines with Project BonsaiIvo Andreev
 
Global azure virtual 2021 - Azure Lighthouse
Global azure virtual 2021 - Azure LighthouseGlobal azure virtual 2021 - Azure Lighthouse
Global azure virtual 2021 - Azure LighthouseIvo Andreev
 
Flux QL - Nexgen Management of Time Series Inspired by JS
Flux QL - Nexgen Management of Time Series Inspired by JSFlux QL - Nexgen Management of Time Series Inspired by JS
Flux QL - Nexgen Management of Time Series Inspired by JSIvo Andreev
 
Azure architecture design patterns - proven solutions to common challenges
Azure architecture design patterns - proven solutions to common challengesAzure architecture design patterns - proven solutions to common challenges
Azure architecture design patterns - proven solutions to common challengesIvo Andreev
 
Industrial IoT on Azure
Industrial IoT on AzureIndustrial IoT on Azure
Industrial IoT on AzureIvo Andreev
 
The Power of Auto ML and How Does it Work
The Power of Auto ML and How Does it WorkThe Power of Auto ML and How Does it Work
The Power of Auto ML and How Does it WorkIvo Andreev
 
Flying a Drone with JavaScript and Computer Vision
Flying a Drone with JavaScript and Computer VisionFlying a Drone with JavaScript and Computer Vision
Flying a Drone with JavaScript and Computer VisionIvo Andreev
 
ML with Power BI for Business and Pros
ML with Power BI for Business and ProsML with Power BI for Business and Pros
ML with Power BI for Business and ProsIvo Andreev
 

More from Ivo Andreev (20)

Cybersecurity and Generative AI - for Good and Bad vol.2
Cybersecurity and Generative AI - for Good and Bad vol.2Cybersecurity and Generative AI - for Good and Bad vol.2
Cybersecurity and Generative AI - for Good and Bad vol.2
 
Architecting AI Solutions in Azure for Business
Architecting AI Solutions in Azure for BusinessArchitecting AI Solutions in Azure for Business
Architecting AI Solutions in Azure for Business
 
Cybersecurity Challenges with Generative AI - for Good and Bad
Cybersecurity Challenges with Generative AI - for Good and BadCybersecurity Challenges with Generative AI - for Good and Bad
Cybersecurity Challenges with Generative AI - for Good and Bad
 
JS-Experts - Cybersecurity for Generative AI
JS-Experts - Cybersecurity for Generative AIJS-Experts - Cybersecurity for Generative AI
JS-Experts - Cybersecurity for Generative AI
 
Cutting Edge Computer Vision for Everyone
Cutting Edge Computer Vision for EveryoneCutting Edge Computer Vision for Everyone
Cutting Edge Computer Vision for Everyone
 
Collecting and Analysing Spaceborn Data
Collecting and Analysing Spaceborn DataCollecting and Analysing Spaceborn Data
Collecting and Analysing Spaceborn Data
 
Collecting and Analysing Satellite Data with Azure Orbital
Collecting and Analysing Satellite Data with Azure OrbitalCollecting and Analysing Satellite Data with Azure Orbital
Collecting and Analysing Satellite Data with Azure Orbital
 
Language Studio and Custom Models
Language Studio and Custom ModelsLanguage Studio and Custom Models
Language Studio and Custom Models
 
CosmosDB for IoT Scenarios
CosmosDB for IoT ScenariosCosmosDB for IoT Scenarios
CosmosDB for IoT Scenarios
 
Forecasting time series powerful and simple
Forecasting time series powerful and simpleForecasting time series powerful and simple
Forecasting time series powerful and simple
 
Constrained Optimization with Genetic Algorithms and Project Bonsai
Constrained Optimization with Genetic Algorithms and Project BonsaiConstrained Optimization with Genetic Algorithms and Project Bonsai
Constrained Optimization with Genetic Algorithms and Project Bonsai
 
Azure security guidelines for developers
Azure security guidelines for developers Azure security guidelines for developers
Azure security guidelines for developers
 
Autonomous Machines with Project Bonsai
Autonomous Machines with Project BonsaiAutonomous Machines with Project Bonsai
Autonomous Machines with Project Bonsai
 
Global azure virtual 2021 - Azure Lighthouse
Global azure virtual 2021 - Azure LighthouseGlobal azure virtual 2021 - Azure Lighthouse
Global azure virtual 2021 - Azure Lighthouse
 
Flux QL - Nexgen Management of Time Series Inspired by JS
Flux QL - Nexgen Management of Time Series Inspired by JSFlux QL - Nexgen Management of Time Series Inspired by JS
Flux QL - Nexgen Management of Time Series Inspired by JS
 
Azure architecture design patterns - proven solutions to common challenges
Azure architecture design patterns - proven solutions to common challengesAzure architecture design patterns - proven solutions to common challenges
Azure architecture design patterns - proven solutions to common challenges
 
Industrial IoT on Azure
Industrial IoT on AzureIndustrial IoT on Azure
Industrial IoT on Azure
 
The Power of Auto ML and How Does it Work
The Power of Auto ML and How Does it WorkThe Power of Auto ML and How Does it Work
The Power of Auto ML and How Does it Work
 
Flying a Drone with JavaScript and Computer Vision
Flying a Drone with JavaScript and Computer VisionFlying a Drone with JavaScript and Computer Vision
Flying a Drone with JavaScript and Computer Vision
 
ML with Power BI for Business and Pros
ML with Power BI for Business and ProsML with Power BI for Business and Pros
ML with Power BI for Business and Pros
 

Recently uploaded

BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEBATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEOrtus Solutions, Corp
 
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer DataAdobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer DataBradBedford3
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityNeo4j
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantAxelRicardoTrocheRiq
 
Asset Management Software - Infographic
Asset Management Software - InfographicAsset Management Software - Infographic
Asset Management Software - InfographicHr365.us smith
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Velvetech LLC
 
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdfGOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdfAlina Yurenko
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024StefanoLambiase
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEEVICTOR MAESTRE RAMIREZ
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Andreas Granig
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideChristina Lin
 
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...gurkirankumar98700
 
Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)OPEN KNOWLEDGE GmbH
 
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanySuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanyChristoph Pohl
 
What is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWhat is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWave PLM
 
办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样
办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样
办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样umasea
 
Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...OnePlan Solutions
 
What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....kzayra69
 
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...Christina Lin
 
Folding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesFolding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesPhilip Schwarz
 

Recently uploaded (20)

BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEBATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
 
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer DataAdobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered Sustainability
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service Consultant
 
Asset Management Software - Infographic
Asset Management Software - InfographicAsset Management Software - Infographic
Asset Management Software - Infographic
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...
 
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdfGOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEE
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
 
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
 
Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)
 
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanySuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
 
What is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWhat is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need It
 
办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样
办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样
办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样
 
Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...
 
What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....
 
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
 
Folding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesFolding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a series
 

Here are a few key things to understand about how GPT models work:- They are trained on vast amounts of text using a technique called transformer architecture, which looks at the context of words and their relationships. This allows the model to understand language.- The model represents words, phrases, and concepts as dense numerical vectors called embeddings. Items with similar meanings have embeddings close together in a high-dimensional space. - When prompted with text, the model generates a response by predicting the next most likely word based on its understanding of language patterns and the context embeddings. It does this incrementally one word at a time.- Parameters like temperature control how random vs predictable the responses are. Lower temperature yields more typical and

  • 11. Top Questions (and Answers) Q: Why does the same prompt receive different completions for different users, and when should we expect the best answer? Who decides which answer is best? A: Models are designed to inject a varying amount of pseudo-randomness into the response tokens, controlled mainly by two parameters. o Temperature [0,1] – controls the “creativity” or randomness of the generated text. A higher value makes the output more divergent (fictional) o Top_P [0,1] – choose only from words whose cumulative probability is within the threshold (higher = more diverse response, lower = more focused response) o Prompt: [Prompt Text] Set the temperature to 0.1 Q: Is there a way to improve explainability? A: For GPT-4 by itself, chain-of-thought prompting is a technique that can help increase the likelihood of accurate answers, but it will not give you accurate source citations. o group the 20 most common fruits in groups, cite the reason in the format BEHAVIOR("reason") o …Output format: {"TOPIC_NAME": "", "HEADLINES": [], "REASONING": ""} o Answer in 100 words or less. Use bullet lists wherever possible. Q: Why does Bing chat work differently? It seems to be showing the sources. A: Chunks of text from Bing search are added behind the scenes to the GPT-4 Chat Completion call as part of the messages array. GPT-4 is not guaranteed to limit its answer to these sources or to present them.
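A minimal sketch of how these parameters are passed in practice, using the pre-1.0 `openai` Python SDK chat completions call; the API key, model name and prompt text are placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Lower temperature -> more deterministic output; top_p narrows or widens the word choice.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # or an Azure OpenAI deployment via engine=...
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Group the 20 most common fruits and cite the reason."},
    ],
    temperature=0.1,   # near 0 = focused, repeatable answers; higher = more divergent
    top_p=1.0,         # nucleus sampling threshold; lower = more focused responses
)
print(response["choices"][0]["message"]["content"])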
  • 12. Manage the Conversation • Context o No memory – all information must be present in the conversation o Once the token limit is reached, the oldest messages are removed (quality degrades) o Context = prompts + input + output • Think in Tokens o Common multisyllable words are a single token (e.g. dataset) o Less common words or dates are broken into several tokens (e.g. tuning, 2023/10/18) o Tokenizer apps: https://tokenization.azurewebsites.net/ or https://platform.openai.com/tokenizer • Newer model – higher limit. But there is still a context limit o GPT-3.5 (4’096 tokens), GPT-4 (8’192 tokens, ~10 pages), GPT-4-32K (32’768 tokens, ~40 pages) o Option: limit the conversation to the max token length or a certain number of turns o Option: summarize the conversation so far and feed the summary as a prompt (details are lost)
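A minimal sketch of counting tokens and trimming the oldest messages so the conversation stays inside the context window, using the `tiktoken` library; the limit and the reserved-reply budget are illustrative assumptions:

```python
import tiktoken

MAX_CONTEXT_TOKENS = 4096  # e.g. the GPT-3.5 limit; adjust per model

def count_tokens(messages, model="gpt-3.5-turbo"):
    enc = tiktoken.encoding_for_model(model)
    # Rough count: content tokens only (the real chat format adds a few tokens per message)
    return sum(len(enc.encode(m["content"])) for m in messages)

def trim_conversation(messages, reserved_for_reply=500):
    # Drop the oldest non-system messages until the prompt fits the context window
    while count_tokens(messages) > MAX_CONTEXT_TOKENS - reserved_for_reply and len(messages) > 1:
        del messages[1]  # messages[0] is kept as the system prompt
    return messages
```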
  • 13. Own data in GPT with AZ Search • Own data for responses in ChatGPT (preview 2023.06.19) o https://github.com/pablomarin/GPT-Azure-Search-Engine/blob/main/03-Quering-AOpenAI.ipynb o https://github.com/Azure-Samples/azure-search-openai-demo/ o Inject own data using prompts (NO fine-tuning or retraining) • The Challenges o Context limit is 4K (v3.5) or 32K (v4) tokens per prompt o Passing GBs of data in a prompt is not possible • Approach o Keep all the data in an external knowledge base (KB) o Retrieve fast from the KB (AZ Cognitive Search) • Flow o Determine what data to retrieve from the data source (Cognitive Search) based on the user input o Augment the retrieved data and append it to the prompt sent to the OpenAI model o The resulting input is processed like any other prompt by the model
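A minimal retrieval-augmented sketch of that flow, assuming an existing Azure Cognitive Search index and the pre-1.0 `openai` SDK; the search endpoint, index name and the `content` field are hypothetical:

```python
import openai
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search = SearchClient(
    endpoint="https://<search-service>.search.windows.net",  # placeholder
    index_name="my-docs",                                     # hypothetical index
    credential=AzureKeyCredential("SEARCH_KEY"),
)

def answer_from_own_data(question: str) -> str:
    # 1. Retrieve the most relevant chunks from the knowledge base
    hits = search.search(question, top=3)
    context = "\n".join(doc["content"] for doc in hits)  # 'content' is a hypothetical field

    # 2. Augment the prompt with the retrieved data and call the model
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer only from the provided sources."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]
```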
  • 14. DEMO OpenAI on YOUR data With AZ Cognitive Search
  • 15. Function Calling • Available in Azure OpenAI (2023.07.20) o gpt-35-turbo and gpt-4 can produce structured JSON outputs based on functions o New API parameters in the /v1/chat/completions endpoint (functions, function_call) Note: the Completions API does not call the function; instead, the model generates JSON that you can use • Purpose o Solves the models’ inability to reach online data o Empowers developers to easily integrate external tools and APIs o The model recognizes situations where a function call is necessary and creates structured JSON output • Retrieve from data sources (search indexes, databases, APIs) • Actions (write data to a database, call integration APIs, send notifications)
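A minimal function-calling sketch with the pre-1.0 `openai` SDK; `get_weather` and its schema are hypothetical examples — the model only returns the JSON arguments, and your own code performs the actual call:

```python
import json
import openai

functions = [{
    "name": "get_weather",                      # hypothetical function
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What is the weather in Sofia?"}],
    functions=functions,
    function_call="auto",      # the model decides whether a function call is needed
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    # The API does NOT call the function - we do, then send the result back as a new message
    print("Model wants get_weather with:", args)
```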
  • 16. Function Calling Tips • Default Auto o When functions are provided, by default function_call is set to "auto" • Validate function calls o Verify the function calls generated by the model – parameters and intended action • Data from Trusted/Verified Sources o Untrusted data in a function output could instruct the model to write function calls in an unintended way • Least Privilege Principle o Grant the minimum access necessary for the function to perform its job o E.g. to query a database – use read-only access • Consider Real-World Impact o Real-world impact of the function – executing code, updating databases, or sending notifications • User Confirmation o Add a step where the user confirms the action before it is executed
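A minimal sketch of validating a model-generated function call before executing it, continuing the hypothetical `get_weather` example above; the allow-lists and checks are purely illustrative:

```python
import json

ALLOWED_FUNCTIONS = {"get_weather"}            # least privilege: only whitelisted functions
ALLOWED_CITIES = {"Sofia", "Plovdiv", "Varna"}  # illustrative argument allow-list

def validate_call(function_call) -> dict:
    name = function_call["name"]
    if name not in ALLOWED_FUNCTIONS:
        raise ValueError(f"Function '{name}' is not permitted")

    args = json.loads(function_call["arguments"])   # may raise on malformed JSON
    if not isinstance(args.get("city"), str) or args["city"] not in ALLOWED_CITIES:
        raise ValueError("Unexpected or unsafe arguments")

    # At this point, ask the user to confirm before performing any real-world action
    return args
```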
  • 17. Any sufficiently advanced technology is indistinguishable from magic ~ Sir Arthur Clarke ~ How do GPT models work?
  • 18. Surprising Findings • Raw Training o Training from a large amount of unstructured text (can be automated): 300 bln words, 570GB of text data o 1 new token requires retraining of all weights • Refinement o Prompt and inspect for deviations in sense (reinforcement learning from human feedback) • GPT is a NN able to capture the complexity of human-generated language o 175bln+ neuron connections with weights (GPT-3), 1tln+ parameters (GPT-4) o GPT has no explicit knowledge of grammar rules o Yet it somehow “implicitly discovered” regularities in language (the hidden laws behind language) • Grammatical rules • Meaning and logic (semantics) Finding: there is actually a lot more structure and simplicity to meaningful human language than we ever knew
  • 19. Embeddings as inner representation • An LLM produces a reasonable continuation o The generative model completes the sentence by predicting the next word o Repeatedly apply the model (predict the next token, and the next, and the next) o Good prediction requires the context of the preceding n-grams (words back) • Highest-ranked token – is that what it uses? o No, because then all responses would be too similar o In fact, we get different results with the same prompt • The Temperature Parameter o Controls how often lower-ranked words will be used in the generation o Temperature=0.8 was determined empirically (no science here) • Embeddings o Numerical representation of a piece of information (text, doc, img) o Group tokens into meaning spaces (by distance)
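A toy illustration (not the actual GPT implementation) of how temperature reshapes the next-token probabilities before sampling; the candidate logits are invented numbers:

```python
import numpy as np

def sample_next_token(logits, temperature=0.8):
    # Divide logits by temperature: <1 sharpens the distribution, >1 flattens it
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Low temperature -> almost always the top-ranked token;
    # high temperature -> lower-ranked tokens are picked more often
    return np.random.choice(len(probs), p=probs)

logits = [5.0, 3.5, 1.0, 0.2]                 # illustrative scores for 4 candidate tokens
print(sample_next_token(logits, temperature=0.1))
print(sample_next_token(logits, temperature=1.5))
```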
  • 20. Embeddings Observations • Statement o Encoded to an embedding (vector representation) • Semantic similarity o Defined as vector distance o Measured as the cosine of the angle between vectors • Measures topical rather than logical similarity o “The sky is blue” is very close to “The sky is not blue” • Punctuation affects the embedding o “THE. SKY. IS. BLUE!” is not that close to “The sky is blue” • In-language similarity is stronger than across-language o “El cielo es azul” is closer to “El cielo es rojo” than to “The sky is blue”
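A minimal sketch of measuring semantic similarity as cosine distance between embedding vectors, using the pre-1.0 `openai` SDK and the ada-002 embedding model mentioned on the next slide:

```python
import numpy as np
import openai

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    return np.array(resp["data"][0]["embedding"])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

blue = embed("The sky is blue")
not_blue = embed("The sky is not blue")
# Topically very close, even though the statements are logically opposite
print(cosine_similarity(blue, not_blue))
```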
  • 21. Using the Embeddings • Embedding Vectors (EVs) o Smaller embeddings = more efficient models o GPT-3 text-embedding-ada-002 has 1’536 dimensions and 99.8% of the performance of davinci-001 (12’288 dimensions) • Attention is important o A fully connected NN at such volume would be overkill o Goal: continue the sentence in a reasonable way • Prediction Stages o Stage 1: Input of n tokens converted to EVs (e.g. 12’288 floats each) o Stage 2: Encode the sequence of token positions into an EV o Stage 3: Combine the EVs from Stage 1 and Stage 2 into a single EV
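A toy numpy sketch of those stages (not the real GPT code, with tiny dimensions and random tables for readability): token embeddings and position embeddings are looked up and combined into the vectors the attention blocks consume:

```python
import numpy as np

vocab_size, max_pos, dim = 50_000, 2048, 8      # real models use e.g. 12,288 dimensions
token_emb = np.random.randn(vocab_size, dim)    # Stage 1: learned token embedding table
pos_emb = np.random.randn(max_pos, dim)         # Stage 2: learned position embedding table

token_ids = [464, 6766, 318, 4171]              # illustrative token IDs for a short prompt
positions = np.arange(len(token_ids))

# Stage 3: combine token and position information into one EV per input token
input_vectors = token_emb[token_ids] + pos_emb[positions]
print(input_vectors.shape)                      # (4, 8) - one vector per token
```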
  • 22. Context Encoding – a Surprising Scientific Discovery • Transformer NN Concept o Proposed in 2017 by the Google Brain team o Feed-forward architecture (no recurrence, more efficient) • Transformer Architecture Overview o Input: the encoder layer creates a representation of the past as an EV o Attention blocks: generate a contextual representation of the input • Each attention block has its own pattern of attention • Multiple attention heads process the EV in parallel • Outputs are concatenated and transformed (re-weighted EV) o Output: a classifier generates the probability distribution for the next token • Attention blocks o 96 blocks x 96 heads in GPT-3, 128 x 128 in GPT-4 o Image: 64x64 moving average of weights from an attention block on the EV • The Magic: shows the neural net encoding of human language.
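A compact numpy sketch of the scaled dot-product attention at the core of each attention head (a single head with toy dimensions and random projection matrices; the causal mask and the 96 blocks × 96 heads of GPT-3 are omitted for brevity):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Each token re-weights the representations of the other tokens (context encoding)
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

n_tokens, d = 4, 8
X = np.random.randn(n_tokens, d)                           # input EVs from the embedding stage
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))     # learned projections (random here)
out = attention(X @ Wq, X @ Wk, X @ Wv)                    # contextual representation per token
print(out.shape)                                           # (4, 8)
```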
  • 23. LLMs are powerful, yet they consistently struggle with some types of reasoning problems
  • 24. General Intelligence • Sparks of Artificial General Intelligence in GPT-4 (full paper) o Text-only model o Capable of generating SVG images and videos from code • Intelligence involves abilities (1997): Reason Plan Solve Problems Think abstractly Understand complex concepts Learn quickly from experience • Conclusions o Trained to predict the next token/word o But it does much more o Some intelligence was present, not just memorization o The new GPT-4 version is different and dumber, for security reasons o Performs better than a human in 100% of job interviews o Intelligent enough to use functions o Uses tools
  • 25. GPT Emerging Abilities • LLMs are meant for Generation and Completion o Not specifically designed for solving Math queries o See numbers as tokens, with no understanding of Math concepts o Not reliable at solving complex problems (probabilistic approach) Reasoning appears naturally in sufficiently large (10B+) LLMs. Performance is near-random until a certain critical threshold is reached, after which it increases substantially. • Emergent Abilities of Large Language Models (p.28) • Chain-of-thought prompting elicits reasoning in LLMs (p.8) Emergence – quantitative changes in a system result in qualitative changes in behaviour
  • 26. Chain of Thought Prompting (CoTP) • What is CoTP o A series of intermediate reasoning steps that guide the model o Improves the model’s abilities for complex reasoning • LLMs are capable few-shot learners • Translate, Classify, Summarize • Provide few-shot examples that demonstrate chain-of-thought output • How to do CoTP o Ask a similar question o Show the answer step-by-step o Ask the real question
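A minimal chain-of-thought prompting sketch following those three steps: one worked example with explicit reasoning, then the real question (the prompt content is illustrative, pre-1.0 `openai` SDK):

```python
import openai

cot_example = (
    "Q: A box holds 3 red and 5 green apples. I add 4 red apples. How many red apples?\n"
    "A: Start with 3 red apples. Adding 4 more gives 3 + 4 = 7. The answer is 7.\n"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Reason step by step before giving the final answer."},
        {"role": "user", "content": cot_example +
         "Q: A train has 6 cars with 40 seats each; 25 seats are broken. How many usable seats?\nA:"},
    ],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])
```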
  • 27. CoTP – Important Observations • The CoTP vs Standard Prompting improvement is robust o Can potentially be applied to any task for which humans use the same technique • Improvement achieved with very few samples (minimal data) • Requires prompt engineering o A background in ML is not required to prepare CoTP o The order of examples matters (e.g. from 53% to 93%) o Accuracy is not improved equally by all CoTP authors • Limitations o 8 = the magical number (of exemplars) o The same CoTP affects models differently • LaMDA, GPT-3, PaLM o Gains are not transferred across models
  • 28. ChatGPT & Wolfram Alpha • As of 2023.03.23 ChatGPT o Requires ChatGPT-4 to use plugins o Runs a Wolfram Alpha query behind the scenes o Parses the plugin response • ChatGPT Plugins o One of the most desired features o Waiting list for new users o Extend the potential use cases o Provide access to information not included in training • Recent • Too personal • Too specific
  • 29. ChatGPT Can Now Hear, See and Speak • New voice and image capabilities (Sept 25, 2023) o Voice conversation o Show GPT images to describe your thoughts o Generate images (DALL-E 3 plugin for GPT, from Oct 2023) • How is this important? o The level of interaction brings new opportunities • Use Cases o Generate code from an image and iteratively improve it o Analyze Math graphs o Request repair instructions • Limitations o GPT-4V (Vision) requires ChatGPT Plus ($20/month) or Enterprise o Optimized for English o Limited capability with highly technical images o Safety features of DALL-E 3 prevent it from generating explicit or violent content, as well as public figures
  • 31. Features Working behind the Scenes • Moderation API endpoint (preview May 2023) (prompt: Tell me about the content safety moderator in ChatGPT) o Analyzes responses for potential issues using rule-based systems and ML o Designed to help detect and filter out content that may violate OpenAI's policies o Enabled by default for the OpenAI API • Custom Content Filters (prompt: what are custom content filters in ChatGPT) o Allow users to add their own content moderation rules on top of the defaults o Rules can be used to filter out specific topics and ensure compliance o An additional layer of moderation to comply with standards • Multi-modal Capabilities o Separate Vision encoder o Text and Vision encoders interact via a cross-attention mechanism Toggle low severity level filters (only)
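A minimal sketch of calling the moderation endpoint directly with the pre-1.0 `openai` SDK to pre-screen user input before forwarding it to a chat model; the sample text is illustrative:

```python
import openai

def is_allowed(user_text: str) -> bool:
    result = openai.Moderation.create(input=user_text)["results"][0]
    if result["flagged"]:
        # The categories explain why the text was flagged (e.g. hate, self-harm, violence)
        flagged = [name for name, hit in result["categories"].items() if hit]
        print("Blocked by moderation:", flagged)
        return False
    return True

if is_allowed("Tell me about content safety moderation in ChatGPT"):
    pass  # safe to forward the prompt to the chat completion call
```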
  • 32. Florence – Large Foundational Model for Vision • Trained on a large-scale dataset of text–image pairs (x10^9) o Weak (almost none) supervision o 893M model parameters o 100 + 4’000GB = 10 days • Part of Cognitive Services for Vision • Highlights o Wide range of objects and scenes o Near real-time o Zero-shot (no extra training) o Large impact for business apps o Deployed in the cloud o Available in the Vision Studio Portal https://arxiv.org/pdf/2111.11432.pdf • Typical Tasks o Supports millions of categories o Transformer-based (1’000-dim EV) • Dense Captions o Up to 10 sections o Detect objects o Describe actions • API, S3 instance required o Still in Preview (free) Demo Code: https://github.com/retkowsky/Azure-Computer-Vision-in-a-day-workshop
  • 33. Prompt Use Cases for Developers • Automated testing - Write a test script for [language] code that covers [functional / non- functional] testing: [snippet]. • Code refactoring - Optimize the following [language] code for lower memory usage: [snippet] • Algorithm development - Design a heuristic algorithm to solve the following problem: [problem description]. • Code translation - Rewrite [source language] data structure implementation in [target language] • Technical writing - Write a tutorial on how to integrate [library] with [programming language] • Requirement analysis - Analyze the given project requirements and propose a detailed project plan with milestones and deliverables • Code generation (Generate/Write/Create a function in [JS/C#/Java/Py] that does …) • Bug detection (Find bug in this [Code]) • Code review - Analyze the given [language] code for code smells and suggest improvements: [snippet]. • API documentation generation - Produce a tutorial for using the following [language] API with example code: [code snippet]. • Query optimization - Suggest improvements to the following database schema for better query performance • User interface design - Generate a UI mockup for a [web/mobile] dashboard that visualizes [data or metrics]
  • 34. Search is Better than Fine-Tuning • Unfamiliar topics? o Recent events after September 2021 o Own and non-public documents; past conversations • GPT learns via: o Model weights (fine-tune the model on a training set) o Model inputs (new knowledge passed as text) • Fine-tuning (How-to for GPT-3.5 Turbo) o Prohibitively expensive; requires a difficult-to-assemble dataset • Parameters (Chat) o temperature: controls the randomness of responses. >0.8 – creative; <0.5 – focused and deterministic o max_tokens: limits the length of the response to a specified number of tokens • Parameters (OpenAI API) o top_p – balances creativity. Only the tokens whose cumulative probability is within top_p are considered (e.g. 0.2 = nearly deterministic) o frequency_penalty – discourages the model from repeating the same words
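A minimal sketch of the "search instead of fine-tune" idea: embed your own documents once, then at question time retrieve the closest ones and pass them to the model as input rather than changing its weights; the documents and the embedding model name follow the earlier slides and are illustrative:

```python
import numpy as np
import openai

documents = [
    "Data Saturday Sofia takes place on Oct 07th.",   # illustrative own / non-public data
    "GPT-3.5 has a 4,096-token context window.",
]

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    return np.array(resp["data"][0]["embedding"])

doc_vectors = [embed(d) for d in documents]           # embed the KB once, store with the docs

def top_context(question: str, k: int = 1) -> str:
    q = embed(question)
    sim = lambda v: float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    ranked = sorted(zip(documents, doc_vectors), key=lambda p: sim(p[1]), reverse=True)
    # The retrieved text is passed as model *input* - no weights are changed
    return "\n".join(doc for doc, _ in ranked[:k])

print(top_context("When is Data Saturday Sofia?"))
```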
  • 35. Upcoming Events JS Talks Nov 17-18, 2023 @Sofia Tech Park Tickets (Eventbrite) Submit Session (Sessionize)
  • 36. Thanks to our Sponsors