
GAIB Philippines - Tailoring OpenAI’s GPT-3 to suit your specific needs.pptx

Mar. 21, 2023



GAIB Philippines - Tailoring OpenAI’s GPT-3 to suit your specific needs.pptx

  5. Answer: Marcelo Chierighini of Brazil won the gold medal in the men's high jump at the 2020 Summer Olympics.

Editor's Notes

  1. OpenAI is a research organization that aims to promote and develop artificial intelligence (AI) tools that help humanity. Its founders believed that AI had the potential to transform many aspects of society and improve people’s lives, but they also recognized the potential risks associated with the development and deployment of AI. OpenAI conducts research in a variety of areas related to AI, including machine learning, robotics, economics, and computer science. The organization’s notable achievements include the development of the GPT-3 natural language processing model and generative models such as DALL-E and ChatGPT. In addition to its research efforts, OpenAI also works to educate the public about AI and its potential impacts and to promote the responsible development and use of AI in a way that is safe, beneficial, and ethical for humanity.
  2. GPT-3 stands for Generative Pre-trained Transformer 3. It’s a massive artificial intelligence (AI) language model developed by OpenAI and released in June 2020. It’s based on their previous generative models, GPT and GPT-2, but it is much larger and more powerful – so much so that many experts consider it a cornerstone of the next generation of AI technology. GPT-3 contains 175 billion parameters – far more than any previous language model of its kind – and as such offers unparalleled capabilities in natural language processing, text analysis, and natural language understanding (NLU). GPT-3 has been pre-trained on a vast amount of text from the open internet. When given a prompt with just a few examples, it can often intuit what task you are trying to perform and generate a plausible completion. This is often called "few-shot learning" (a short sketch follows these notes). Use cases: produce effective marketing content quickly, assist support agents to improve customer experience, and analyze and answer complex user queries. GPT-4, announced in March 2023, is expected to be significantly more powerful than GPT-3.
  3. With ChatGPT you add a conversational AI layer that serves as more than just a text generator. ChatGPT is a text-generating technology that uses large neural networks to produce plausible responses to input text. It’s used to power interactive conversations, and has been tested in applications such as customer support and marketing automation. ChatGPT is trained on massive datasets of conversations and can understand context to generate more natural-sounding responses than existing systems. The technology is based on the OpenAI GPT-3.5 model, an improved version of the GPT-3 language model. ChatGPT is designed with a two-track approach: one track to output natural-sounding responses, and another track focused on generating appropriate content for the conversation at hand. As such, it can be trained with both conversational data (e.g., human-to-machine) as well as factual information (e.g., machine-to-machine). This makes it particularly powerful for answering questions about topics like products or services – when paired with existing datasets related to those topics, ChatGPT can output highly relevant information quickly and accurately without needing manual tuning. GPT-4, announced in March 2023, is expected to be significantly more powerful than GPT-3.
  4. Fine-tuning is often necessary for domain-specific use cases and for increasing accuracy for a specific implementation in terms of jargon, industry-specific terms, company-specific products and services, etc. Fine-tuning improves on few-shot learning by training on many more examples than can fit in the prompt, letting you achieve better results on a wide range of tasks. Once a model has been fine-tuned, you won't need to provide examples in the prompt anymore. This saves costs and enables lower-latency requests.
  5. We will fine-tune an ada classifier to distinguish between the two sports: baseball and hockey. We can observe that we have 1197 examples in total, which are evenly split between the two sports (a data-loading sketch follows these notes).
  6. One sample from the baseball category can be seen above. It is an email to a mailing list.
  7. Training data is how you teach GPT-3 what you'd like it to say. In fine-tuning, each training example generally consists of a single input example and its associated output, without the need to give detailed instructions or include multiple examples in the same prompt. Your data must be a JSONL document, where each line is a prompt-completion pair corresponding to a training example (a sketch of this layout follows these notes). The prompt contains the email from the mailing list, and the completion is the name of the sport, either hockey or baseball. For demonstration purposes and to speed up fine-tuning, we take only 300 examples. In a real use case, more examples generally mean better performance.
  8. Detailed feedback is given by OpenAI on the progress of the fine-tuning. The fine-tune cost is also reported, which is especially important for ensuring that experimentation and testing do not generate exorbitant costs (a sketch of starting a fine-tune job follows these notes).
  9. If the stream is interrupted, you can restart it with this command (an equivalent re-attachment call appears in the sketch after these notes).
  10. Temperature controls sampling randomness, between 0 and 2: higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. The logprobs parameter includes the log probabilities of the logprobs most likely tokens, as well as the chosen tokens; for example, if logprobs is 5, the API will return a list of the 5 most likely tokens (see the sketch after these notes).
  11. This is what's referred to as the model "hallucinating" an answer instead of just saying "I don't know" as a good AI should.
  12. An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. This technique can be used to augment GPT-3 with a large body of additional contextual information by using document embeddings and retrieval.
  13. The embedding is an information-dense representation of the semantic meaning of a piece of text. Embeddings measure the relatedness of text strings. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with the semantic similarity between the two inputs in their original format. For example, if two texts are similar, then their vector representations should also be similar (a similarity sketch follows these notes). Embeddings are commonly used for: search (where results are ranked by relevance to a query string), clustering (where text strings are grouped by similarity), recommendations (where items with related text strings are recommended), anomaly detection (where outliers with little relatedness are identified), diversity measurement (where similarity distributions are analyzed), and classification (where text strings are classified by their most similar label).
  14. Sections should be large enough to contain enough information to answer a question, but small enough for one or several to fit into the GPT-3 prompt. We find that approximately a paragraph of text is usually a good length, but you should experiment for your particular use case (a token-count sketch follows these notes). In this example, Wikipedia articles are already grouped into semantically related headers, so we will use these to define our sections. Here we obtain our data.
  15. Now we preprocess the document sections by creating an embedding vector for each section (see the sketch after these notes).
  16. We can see that the most relevant document sections for each question include the summaries for the Men's and Women's high jump competitions - which is exactly what we would expect.
  17. GPT-3 is still in its infancy, so it's far from perfect. Yes, it delivers robust solutions, but it still has room to grow. Sam Altman, a founder of OpenAI, summed it up nicely on Twitter. A few downsides to this powerful machine learning technology: Lack of true intelligence: GPT-3 is a deep learning model that uses machine learning algorithms, but it's still not "intelligence." This AI is only using existing text to predict future results; it's not necessarily coming up with anything truly original, as it lacks true understanding and meaning (unlike something like Artificial General Intelligence (AGI)). Privacy risk: It's unclear whether GPT-3 retains any portion of the training data, making it a potential privacy issue. Bias: GPT-3 can be fooled into creating incorrect, racist, sexist, and biased content that's devoid of common sense and real-world sensibility. The model’s output is dependent on its input: garbage in, garbage out.
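Code sketches

For the few-shot prompting described in note 2: a minimal sketch, assuming the pre-v1 openai Python client and an illustrative sentiment-labelling task (the prompt text, labels, and model name are assumptions, not details from the deck).

    import openai

    # A few-shot prompt: two labelled examples, then a new input for the model to complete.
    prompt = (
        "Tweet: I loved the new Batman movie!\nSentiment: positive\n\n"
        "Tweet: The service at this restaurant was awful.\nSentiment: negative\n\n"
        "Tweet: The keynote demo went smoothly.\nSentiment:"
    )
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                    max_tokens=1, temperature=0)
    print(resp["choices"][0]["text"].strip())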
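Note 5's baseball/hockey data appears to match the public OpenAI cookbook classification example, which draws both sports from the 20 Newsgroups dataset; a sketch under that assumption.

    import pandas as pd
    from sklearn.datasets import fetch_20newsgroups

    # Load mailing-list posts for the two sports (assumed source of the 1197 examples).
    categories = ["rec.sport.baseball", "rec.sport.hockey"]
    sports = fetch_20newsgroups(subset="train", categories=categories)
    df = pd.DataFrame({
        "text": sports.data,
        "sport": [sports.target_names[t] for t in sports.target],
    })
    print(len(df))                      # total number of examples
    print(df["sport"].value_counts())   # split between the two sports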
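Note 7's prompt/completion layout, as a minimal sketch: the "\n\n###\n\n" separator, the leading space in the completion, and the file name are common fine-tuning conventions assumed here rather than details taken from the slides.

    import json

    # Each line of the JSONL file is one training example: the prompt holds the
    # mailing-list email, the completion holds the sport label.
    examples = [
        {"prompt": "Subject: best pitching rotation this season...\n\n###\n\n",
         "completion": " baseball"},
        {"prompt": "Subject: power-play percentages in the playoffs...\n\n###\n\n",
         "completion": " hockey"},
    ]
    with open("sport2_prepared_train.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")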
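For notes 8 and 9, a sketch of starting an ada fine-tune and re-attaching to its event stream, assuming the pre-v1 openai Python client; the file name and job ID are placeholders. The CLI equivalents at the time were "openai api fine_tunes.create -t <file> -m ada" and "openai api fine_tunes.follow -i <job id>".

    import openai

    # Upload the prepared JSONL file, then start the fine-tune on the ada base model.
    upload = openai.File.create(file=open("sport2_prepared_train.jsonl", "rb"),
                                purpose="fine-tune")
    job = openai.FineTune.create(training_file=upload["id"], model="ada")
    print(job["id"], job["status"])

    # If the progress stream is interrupted, re-attach by listing the job's events
    # ("ft-abc123" is a placeholder job ID).
    for event in openai.FineTune.list_events(id="ft-abc123")["data"]:
        print(event["created_at"], event["message"])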
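Note 10's temperature and logprobs parameters, sketched as a query against the fine-tuned classifier; the model name is a placeholder for whatever name the fine-tune job returns.

    import openai

    # Low temperature keeps the classification nearly deterministic; logprobs=2 asks
    # the API to return the two most likely tokens with their log probabilities.
    resp = openai.Completion.create(
        model="ada:ft-your-org-2023-03-21-10-00-00",   # placeholder fine-tuned model name
        prompt="Subject: best pitching rotation this season...\n\n###\n\n",
        max_tokens=1,
        temperature=0.2,
        logprobs=2,
    )
    choice = resp["choices"][0]
    print(choice["text"])                         # predicted sport label
    print(choice["logprobs"]["top_logprobs"][0])  # e.g. the two most likely labels with log probs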
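Note 13's relatedness measure, sketched with the ada-002 embedding model and cosine similarity; the example strings are illustrative.

    import numpy as np
    import openai

    def get_embedding(text, model="text-embedding-ada-002"):
        # Returns the embedding vector for a piece of text.
        resp = openai.Embedding.create(input=[text], model=model)
        return np.array(resp["data"][0]["embedding"])

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    question = "Who won the men's high jump at the 2020 Summer Olympics?"
    section = "2020 Summer Olympics: Athletics – Men's high jump – Summary"
    print(cosine_similarity(get_embedding(question), get_embedding(section)))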
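A rough check for note 14's "large enough but small enough" guidance, assuming the tiktoken encoding used by the ada-002 embedding model; the 500-token budget is an illustrative assumption, not a number from the deck.

    import tiktoken

    # cl100k_base is the encoding used by text-embedding-ada-002.
    enc = tiktoken.get_encoding("cl100k_base")

    def num_tokens(text: str) -> int:
        return len(enc.encode(text))

    sections = {
        ("2020 Summer Olympics", "Men's high jump"): "Summary of the men's high jump event...",
        ("2020 Summer Olympics", "Women's high jump"): "Summary of the women's high jump event...",
    }
    # Flag any section that would not comfortably fit into the prompt.
    too_long = {k: num_tokens(v) for k, v in sections.items() if num_tokens(v) > 500}
    print(too_long)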
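For notes 15 and 16, a self-contained sketch that embeds every document section and then ranks sections by similarity to the question; the section texts are placeholders for the Wikipedia content used in the deck.

    import numpy as np
    import openai

    def embed(text, model="text-embedding-ada-002"):
        return np.array(openai.Embedding.create(input=[text],
                                                model=model)["data"][0]["embedding"])

    # Placeholder sections keyed by (article title, heading).
    sections = {
        ("2020 Summer Olympics", "Men's high jump"): "Summary of the men's high jump event...",
        ("2020 Summer Olympics", "Women's high jump"): "Summary of the women's high jump event...",
    }

    # Note 15: one embedding vector per section.
    document_embeddings = {key: embed(text) for key, text in sections.items()}

    # Note 16: rank sections by cosine similarity to the question embedding.
    def order_by_similarity(question, document_embeddings):
        q = embed(question)
        scored = [(float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))), key)
                  for key, v in document_embeddings.items()]
        return sorted(scored, reverse=True)

    print(order_by_similarity("Who won the 2020 Summer Olympics men's high jump?",
                              document_embeddings)[:3])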