OpenAI API crash course
Learn how to build with "ChatGPT" for fun and profit
About me
Grew up in Rodos, Greece
Software Engineering at GU & Chalmers
Working with embedded systems
Teaching
DIT113, DAT265, Thesis supervision
C++ for professionals, Coursera
Open source projects
https://platis.solutions
https://github.com/platisd
Email: dimitris@platis.solutions
platisd/openai-pr-description
Automatically generate PR descriptions, focus on why not what
Chat completion API
phonix
Generate subtitles, more accurate than YouTube, LinkedIn etc
Speech to text API
sycophant
Write opinionated articles based on the latest news on a topic
Chat completion API, Image generation API
https://robots.army
About this workshop
Explore OpenAI APIs
Chat completion
Function calling
Creating the "perfect" prompt
Reducing costs
"ChatGPT API" or correctly: OpenAI API
OpenAI API is a collection of APIs
APIs offer access to various Large Language Models (LLMs)
LLM: A program trained to understand and generate human language
ChatGPT is a web service using the Chat completion API
Uses gpt-3.5-turbo (free tier) or gpt-4 (paid tier)
OpenAI API endpoints
Chat completion
Given a series of messages, generate a response
Function calling: Choose which function to call
Image generation
Given a text description generate an image
Speech to text
Given an audio file and a prompt generate a transcript
Fine tuning
Train a model using input and output examples
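For instance, speech to text is a single call in the pre-1.0 Python library (a minimal sketch; the file name is hypothetical and whisper-1 is the model name at the time of writing):
import openai

# Transcribe an audio file with the speech-to-text endpoint
with open("recording.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript["text"])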
ChatGPT aside, do you use any tools based on OpenAI's models?
Getting started with OpenAI API
Install Python library
pip install --user openai
Get your API key
Login at platform.openai.com
Go to API keys
Create new secret key
(Optional) Create environment variable OPENAI_API_KEY with key
Ensure successful installation
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
models = openai.Model.list()["data"]
for model in models:
    print(model["id"])
gpt-4
gpt-4-0314
curie-search-query
babbage-search-document
text-search-babbage-doc-001
babbage
...
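The list is long; if you only care about the GPT chat models you can filter it. A minimal sketch using the same pre-1.0 library (the "gpt" prefix check is just a heuristic, not an official filter):
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Heuristic: chat-capable models have ids starting with "gpt"
chat_models = [m["id"] for m in openai.Model.list()["data"] if m["id"].startswith("gpt")]
print("\n".join(sorted(chat_models)))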
Chat completion API
Given a series of messages, create a response. Important parameters:
model : Which LLM to use, balance cost, speed and performance
gpt-3.5-turbo : Cheaper & faster
gpt-4 : Better performance, larger input, fewer hallucinations
temperature : "Creativity" of the model's response
Lower values result in more deterministic responses
Allowed value range: [0.0, 2.0]
messages : A list of messages that represent the "conversation"
Each message needs role and content properties
messages holds the conversation for the model to provide a response.
messages = [
    {"role": "system", "content": "You are an assistant that talks like a 15 yo"},
    {"role": "user", "content": "Should I use goto statements?"},
    {"role": "assistant", "content": "No bro, that's bad practice duh"},
    {"role": "user", "content": user_input},
]
The first 3 elements of the list are the "context"
The last is the user's input we want the model to respond to
system role: High level instructions for the conversation
assistant role: The model's "ideal" (or previous) response
user role: The user's input
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

user_input = input("Enter your programming question: ")
messages = [
    {"role": "system", "content": "You are an assistant that talks like a 15 yo"},
    {"role": "user", "content": "Should I use goto statements?"},
    {"role": "assistant", "content": "No bro, that's bad practice duh"},
    {"role": "user", "content": user_input},
]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo", messages=messages, temperature=0.6
)
print(response.choices[0].message.content)
Enter your programming question: Is OOP good?
OMG, yes! Object-oriented programming is like, the bomb dot
com! It helps you organize your code and makes it easier to
understand and maintain. Plus, you can create cool objects and
stuff. So yeah, OOP is pretty rad!
system prompt
{"role": "system", "content": "You are an assistant that talks like a 15 yo"},
Changing the content to You are a 15 year old programmer wouldn't
necessarily work as intended.
System prompts that clearly illustrate the context work better:
You are a programmer and talk like a 15 yo
You are a programmer and 15 years old
Yes, Object-Oriented Programming (OOP) is widely considered
to be a good programming paradigm. It promotes code
organization, reusability, and modularity.
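You can verify this yourself by holding everything else constant and swapping only the system prompt (a minimal sketch; the two prompts are the ones from above):
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

for system_prompt in (
    "You are a programmer and talk like a 15 yo",  # Informal tone, as intended
    "You are a programmer and 15 years old",       # Often answers formally anyway
):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.6,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Is OOP good?"},
        ],
    )
    print(system_prompt, "->", response.choices[0].message.content)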
temperature values
Higher values = More "creative" responses
Lower values = Less varied responses
Higher values = Lower probability to follow "instructions"
gpt-4 is nonetheless better at following instructions
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {
        "role": "user",
        "content": "Create a JSON object using months of the"
        + " year as keys and days of each month as values",
    },
]
temperature=0.6
Consistently:
{
    "January": 31,
    "February": 28,
    "March": 31,
    "April": 30,
    "May": 31,
    "June": 30,
    "July": 31,
    "August": 31,
    "September": 30,
    "October": 31,
    "November": 30,
    "December": 31
}
temperature=1.9
Consistently not what we intended. One possible response:
```json
{
    "January": 31,
    "February": 28,
    "March": 31,
    "April": 30,
    "May": 31,
    "June": 30,
    "July": 31,
    "August": 31,
    "September": 30,
    "October": 31,
    "November": 30,
    "December": 31
}
```
Note that February has 28 days by default, in line with the regular Gregorian calendar ordering.
Is there anything else I can help you with?
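To see the effect yourself, you can run the same request at several temperature values (a minimal sketch; expect the high-temperature responses to vary between runs):
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {
        "role": "user",
        "content": "Create a JSON object using months of the"
        + " year as keys and days of each month as values",
    },
]

for temperature in (0.0, 0.6, 1.9):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=messages, temperature=temperature
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)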
Function calling - Let the model choose
def get_chat(messages=None, model="gpt-4", temp=0.2, functions=None):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temp,
        functions=functions,
    )
Pass functions to the ChatCompletion API
gpt-4 gives better results than gpt-3.5-turbo with "complex" prompts
Keep temperature rather low for higher focus, less hallucinations
functions parameter
[
    {
        "name": "call_staff",
        "description": "Call a member of staff to the table",
        "parameters": {"type": "object", "properties": {}},
    }
]
Provide a list of functions to the model; it will choose which one to
call based on the context and the function descriptions.
functions = [
    {
        "name": "place_order",
        "description": "Place an order for a pizza",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {
                    "type": "string",
                    "description": "The name of the pizza, e.g. Pepperoni",
                },
                "size": {
                    "type": "string",
                    "enum": ["small", "medium", "large"],
                    "description": "The size of the pizza. Always ask for clarification if not specified.",
                },
                "take_away": {
                    "type": "boolean",
                    "description": "Whether the pizza is taken away. Assume false if not specified.",
                },
            },
            "required": ["name", "size", "take_away"],
        },
    },
]
name : The name of the function to call
description : A description of the function, helps the model choose
parameters/type : Always "object" for now
parameters/properties : Empty if no function parameters
<param>/type : "string", "boolean", "integer" etc (JSON Schema types)
<param>/enum : Optional list of allowed values
<param>/description : Description of parameter, helps the model
parameters/required : List of required parameters
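The schema only describes place_order to the model; you still implement and call the function yourself. A minimal sketch of a matching implementation (the body is hypothetical; only the parameter names must match the schema's properties):
def place_order(name, size, take_away):
    # Parameter names match the "properties" of the schema above;
    # the body is a stand-in for your real ordering logic
    where = "to take away" if take_away else "to eat in"
    print(f"Placing order for {name} pizza, {size}, {where}")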
system role prompt
messages = [
    {
        "role": "system",
        "content": "Don't make assumptions about what values "
        + "to put into functions. Ask for clarification if you need to.",
    },
    {
        "role": "system",
        "content": "Only use the functions you have been provided with.",
    },
]
Avoid "hallucinations" and stick to the provided functions.
The model will happily make things up!
API response
{
    "role": "assistant",
    "content": null,
    "function_call": {
        "name": "place_order",
        "arguments": "{\n\"name\": \"Margherita\",\n\"size\": \"medium\",\n\"take_away\": false\n}"
    }
}
If the model has decided a function call needs to be made, the response
will contain a function_call property and content will be null.
arguments is a string, not a JSON object
def get_chat(messages=None, model="gpt-4", temp=0.2, functions=None):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temp,
        functions=functions,
    )
    message = response.choices[0].message
    if "function_call" in message:
        # Look up the chosen function by name and call it with the
        # arguments the model provided (parsed from a JSON string)
        f = globals()[message["function_call"]["name"]]
        params = json.loads(message["function_call"]["arguments"])
        f(**params)
    else:
        print(message["content"])
    return "function_call" in message
Remember to append the user's input to the messages list
so that the model can use previous input as context.
function_called = False
print("How may I help you?")
while not function_called:
    user_input = input()
    messages.append({"role": "user", "content": user_input})
    function_called = get_chat(messages=messages, functions=functions)
Does not take much code to create a "chatbot" that calls functions
So it shouldn't take that long to develop it, right?
>> How may I help you?
<< I'd like to order some pizza please
>> Of course, I'd be happy to assist you with that. Could you
please specify the name of the pizza you'd like to order and the
size you prefer? Also, would this be for take away or are you
dining in?
<< I'd like a large one
>> Sure, could you please specify the type of pizza you would
like to order?
<< A Margherita please
>> Placing order for Margherita pizza, large to eat in
Conclusion
Chat completions API for summarizing, extending etc
More extensive than what we covered here
Can tightly integrate it with your system via function calling
gpt-3.5-turbo is often good enough and faster but can fall short
Use the Python library to get started quickly
Build your own wrapper around it if needed (see the sketch after this list)
Use OpenAI API Playground to experiment
Full "pizza bot" example: plat.is/openai-pizza
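A minimal sketch of such a wrapper, assuming you only want retries on rate limiting (the retry policy and function name are my own choices, not part of the API):
import time
import openai

def chat(messages, model="gpt-3.5-turbo", temperature=0.6, retries=3):
    for attempt in range(retries):
        try:
            response = openai.ChatCompletion.create(
                model=model, messages=messages, temperature=temperature
            )
            return response.choices[0].message.content
        except openai.error.RateLimitError:
            time.sleep(2**attempt)  # Exponential backoff before retrying
    raise RuntimeError("Gave up after repeated rate limit errors")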
Prompting
The model is a black box, we cannot be certain how it will respond
In practice the previous, simplified approach won't be enough
Prompting is evolving into its own "craft"
Hype around "prompt engineering" by "influencers"
Approach iteratively, follow best practices, it's not magic
Be specific - Outcome
Write a program that receives a video file and turns it into a GIF.
Write a program in Python that receives a video file as a
command line argument. Use the imageio library to turn it into a
GIF.
Be specific - Length, format and style
Summarize the following text: {text}
Summarize the following text in 3 sentences. Use your own words
and do not plagiarize: {text}
Be specific - Delimit the input
Summarize the following text: {text}
Summarize the following text: ```{text}```
Be specific - Output format
Summarize the following text: {text}
Summarize the following text as a JSON object where the key is
`summary` and the value is the summary: ```{text}```
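Asking for JSON pays off once you parse the response programmatically (a minimal sketch; the model usually, but not always, returns valid JSON, so parsing can still fail):
import json
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

text = "..."  # The text to summarize
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Summarize the following text as a JSON object where the key"
            + f" is `summary` and the value is the summary: ```{text}```",
        }
    ],
)
summary = json.loads(response.choices[0].message.content)["summary"]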
Nudge the model
Given the diff of the pull request, focus on why the change is
needed.
`git diff`: ```{diff}```
This PR is needed because it will
Give examples - Start simple
In the following police report, provide a list of the names of the
people arrested.
Names:
Give examples - Simple didn't work? Provide more!
Provide a list of the names of the people arrested.
Report 1: Yesterday John Doe along with his friend Jane Doe were
arrested for trespassing. Officer Barbrady was the arresting officer.
Names 1: John Doe, Jane Doe
Report 2: This morning Bob Smith was arrested for speeding.
Witnesses including Robert Johnson and Mary Young saw the
incident.
Names 2: Bob Smith
Report 3: {report}
Names 3:
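One way to wire up such a few-shot prompt (a minimal sketch; the reports are the examples from above and the function name is my own):
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

few_shot = """Provide a list of the names of the people arrested.
Report 1: Yesterday John Doe along with his friend Jane Doe were
arrested for trespassing. Officer Barbrady was the arresting officer.
Names 1: John Doe, Jane Doe
Report 2: This morning Bob Smith was arrested for speeding.
Witnesses including Robert Johnson and Mary Young saw the incident.
Names 2: Bob Smith
Report 3: {report}
Names 3:"""

def extract_names(report):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.0,  # Extraction benefits from deterministic output
        messages=[{"role": "user", "content": few_shot.format(report=report)}],
    )
    return response.choices[0].message.content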
Provide steps for complex tasks
Write a program that receives a video file and turns it into a GIF.
1. Use Python
2. Use the imageio library
3. Read the command line argument `--video`
4. Check if the file exists
5. Read the file
6. Feed the file to `imageio`
7. Parse the `--output` command line argument
8. Write the GIF to the provided output path
Break down complex prompts and chain them
Write a short poem inspired by the characters in the following text
and generate 3 keywords for the poem: {text}
Write a short poem inspired by the characters in the following
text: {text}
Generate three keywords for the following poem: {poem}
Break down large prompts and chain them
Summarize the following texts into a single concise text:
Text 1: {potentially_long_text1}
Text 2: {potentially_long_text2}
Text 3: {potentially_long_text3}
Summarize the following text: {potentially_long_text1}
Summarize the following text: {potentially_long_text2}
Summarize the following text: {potentially_long_text3}
Summarize the following texts into a single concise text:
{summary1} {summary2} {summary3}
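A minimal sketch of the summarize-then-merge chaining (the summarize helper is my own wrapper around the one-off calls above):
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

def summarize(text):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": f"Summarize the following text: ```{text}```"}
        ],
    )
    return response.choices[0].message.content

texts = ["...", "...", "..."]  # Replace with the potentially long inputs
summaries = [summarize(text) for text in texts]
# Second pass: merge the partial summaries into a single concise text
final = summarize("\n\n".join(summaries))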
Conclusion
Don't believe the hype, ignore the influencers
Prompting is an iterative process, sometimes slow
Be specific, provide examples, break down complex prompts etc
Your prompts may need to be modified once you switch models
gpt-3.5-turbo to gpt-4 is not always "backwards compatible"
ChatGPT Prompt Engineering for Developers by deeplearning.ai
Best practices for prompt engineering with OpenAI API
Reducing costs
Using the OpenAI API is not free
Costs are based on the number of tokens used
1 token = 4 chars in English (on average)
Using other languages is more expensive
Costs pile up during development but explode in production
Use the cheapest model that works for your use case
Start with gpt-3.5-turbo and see if you can go even cheaper
Speech-to-text API and Image generation API are expensive
Investigate whether you can run Whisper locally
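To estimate costs before sending a request, you can count tokens locally with the tiktoken library (a minimal sketch; the price per token depends on the model and changes over time, so look it up rather than hardcoding it):
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt = "Summarize the following text: ..."
token_count = len(encoding.encode(prompt))
print(f"{token_count} tokens in the prompt")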
Fine tuning
Fine tuning is a way to "teach" the model to react to specific input
If your prompts contain a lot of examples, it might make sense
Fine tuning is expensive but so are large prompts
Paying extra for both fine tuning and calling the fine tuned model
Don't fine tune until you get really good results with prompting
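For reference, the training data for (legacy, at the time of writing) fine tuning is a JSONL file of prompt/completion pairs, roughly like the sketch below; the examples are hypothetical and the exact expected format depends on the model, so check the current OpenAI docs:
{"prompt": "Report: Yesterday John Doe was arrested for trespassing.\nNames:", "completion": " John Doe"}
{"prompt": "Report: This morning Bob Smith was arrested for speeding.\nNames:", "completion": " Bob Smith"}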
Takeaways
LLMs will change the world for the better (not too dramatically)
Learning how to use them, not only as a developer, will be a must
Incorporating them in systems can provide additional value
Sentiment analysis, summarizing, transforming etc
Many more (fun) challenges when integrating an LLM in a system
Moderation, evaluation, reliability, ethics etc
Prompt engineering is an iterative process, potentially slow