Here is some text with PII:
"Emmanuel Ameisen is a Research Engineer at Anthropic. He can be reached at 925-123-456 or emmanuel@anthropic.com"
Please remove all personally identifiable information (PII) from the text. PII includes things like names, email addresses, phone numbers, etc.
How would you rewrite the text with all PII removed?
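One way to approach this mechanically is with regular expressions, though the sketch below is only illustrative: the patterns and the naive name heuristic are assumptions of this example, and production PII removal typically relies on trained named-entity recognition rather than regexes.

```python
import re

def redact_pii(text: str) -> str:
    """Replace common PII patterns with placeholder tags.

    A minimal rule-based sketch: regexes for emails and phone-like
    digit groups, plus a very naive capitalized-name heuristic.
    Real systems use NER models to find names reliably.
    """
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Phone-like digit groups (e.g. 925-123-456)
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{3,4}\b", "[PHONE]", text)
    # Naive heuristic: redact the first pair of adjacent capitalized words
    text = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", "[NAME]", text, count=1)
    return text

sample = ("Emmanuel Ameisen is a Research Engineer at Anthropic. "
          "He can be reached at 925-123-456 or emmanuel@anthropic.com")
print(redact_pii(sample))
```

Note the heuristic's fragility: "Research Engineer" would also match the name pattern, which is exactly why real pipelines lean on NER instead of regexes for names.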
Prompt engineering involves crafting prompts to elicit specific responses from language models. Key components of prompt engineering include clarity, length, context setting, question phrasing, formatting, temperature and max tokens, context length, using prompts in series, task specification, and ethical considerations. Prompt engineering requires a thoughtful approach to guide models and generate accurate and useful outputs.
An expert in prompt engineering provides guidelines on designing effective prompts for natural language models. The document discusses prompt engineering principles, what makes a good prompt, and various prompt frameworks including priming, focused prompts, and practical everyday prompts. Effective prompts are clear, concise, unambiguous, and provide the necessary context and task to generate a desired response from a model. Iteration and adapting the prompt based on the response is important.
The document discusses various use cases for learning ChatGPT through prompts provided in the book "The art of Prompt Engineering with ChatGPT". These use cases include brainstorming ideas in a table, translating a poem from Marathi to English, summarizing content for children, writing articles and blogs, academic writing, drafting emails, learning to code with Python, finding recipes based on available ingredients, and noting important points about ChatGPT's capabilities and limitations. The document provides examples of prompts and ChatGPT's responses for each use case.
Prompt engineering is a technique in artificial intelligence to get AI models like ChatGPT to respond correctly to our needs. The 5W1H framework can be used to get good results from ChatGPT by structuring prompts around what, who, why, where, which, and how. Prompts should provide context on what is expected from the AI, who the context is for, why the generated content is needed, where it will be used, which additional information is required, and how the output should be formatted. Well-structured prompts using this framework can elicit high-quality responses from ChatGPT.
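As an illustration, the 5W1H structure can be captured in a small helper that assembles a prompt from the answered questions. The field labels and example values below are this sketch's own choices, not part of the framework itself.

```python
def build_5w1h_prompt(what, who, why, where, which="", how=""):
    """Assemble a prompt from 5W1H components.

    Each answered question becomes one labeled line of context
    for the model; unanswered questions are simply omitted.
    """
    parts = [
        ("What is expected", what),
        ("Who it is for", who),
        ("Why it is needed", why),
        ("Where it will be used", where),
        ("Which extra details apply", which),
        ("How to format the output", how),
    ]
    return "\n".join(f"{label}: {value}" for label, value in parts if value)

prompt = build_5w1h_prompt(
    what="Write a product announcement",
    who="Existing customers of a budgeting app",
    why="To announce a new savings-goal feature",
    where="The monthly email newsletter",
    how="Three short paragraphs, friendly tone",
)
print(prompt)
```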
Prompt engineering is a fundamental concept within the field of artificial intelligence, with particular relevance to natural language processing. It involves the strategic embedding of task descriptions within the input data of an AI system, often in the form of a question or query, as opposed to explicitly providing the task description separately. This approach optimizes the efficiency and effectiveness of AI models by encapsulating the desired outcome within the input context, thereby enabling more streamlined and context-aware responses.
ChatGPT is a powerful language model developed by OpenAI. It is designed to generate human-like text based on given prompts. As a prompt engineer, you can utilize ChatGPT to create engaging conversations, provide information, answer questions, and assist users. It's a versatile tool for natural language processing tasks, enabling more interactive and intelligent interactions.
Prompt engineering is the practice of designing and refining specific text prompts to guide transformer-based language models, such as Large Language Models (LLMs), in generating desired outputs. It involves crafting clear and specific instructions and allowing the model sufficient time to process information. By carefully engineering prompts, practitioners can harness the capabilities of LLMs to achieve different goals.
Here are the key steps in the ChatIE framework:
1. The user provides a text document and specifies the information extraction task (e.g. entity extraction, relation extraction) through natural language.
2. ChatGPT understands the task and responds with the extracted information by highlighting the relevant entities/relations in the text.
3. The user can interactively give feedback to ChatGPT to refine its understanding of the task and extraction.
4. ChatGPT learns from the feedback to improve its extraction for future conversations.
The framework aims to leverage ChatGPT's strengths in natural language understanding and generation for zero-shot information extraction via human-AI collaboration. The interactive feedback also helps address Chat
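The four steps above can be sketched as a simple loop, with a stub standing in for ChatGPT. The stub and its feedback handling (treating feedback as rejecting an entity) are illustrative assumptions of this sketch, not the actual ChatIE implementation.

```python
import re

def chatie_session(text, task, model, feedback_rounds=None):
    """Run a ChatIE-style interactive extraction session.

    `model` is any callable taking the conversation history and
    returning extracted items. Each feedback message is appended to
    the history and the extraction is re-run, mirroring steps 1-4.
    """
    history = [f"Task: {task}", f"Text: {text}"]
    result = model(history)                   # step 2: initial extraction
    for feedback in (feedback_rounds or []):  # step 3: user feedback
        history.append(f"Feedback: {feedback}")
        result = model(history)               # step 4: refined extraction
    return result

def stub_model(history):
    """Toy stand-in: 'extract' capitalized tokens, dropping any
    entity a feedback message has rejected."""
    text = next(h for h in history if h.startswith("Text: "))[6:]
    entities = set(re.findall(r"\b[A-Z][a-z]+\b", text))
    rejected = {h.split(": ", 1)[1] for h in history if h.startswith("Feedback: ")}
    return sorted(entities - rejected)

out = chatie_session("Alice met Bob in Paris.", "entity extraction",
                     stub_model, feedback_rounds=["Alice"])
print(out)
```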
In the US, people are already using conversational AI such as ChatGPT for everyday mundane tasks, and adoption is not limited to that: various industries are also applying this technology to maintain a superior customer experience. At the same time, ChatGPT is criticized for threatening employment and for producing unethical answers. The technology is widely applauded, but it comes with certain pain points.
Seminar on the ChatGPT Large Language Model, by Abhilash Majumder (Intel)
This presentation is intended for reading only and covers technical details of ChatGPT fundamentals.
Tech adoption for AI and ML has been growing rapidly across the globe, and ChatGPT is a game changer. Artificial intelligence and machine learning are advancing the internet era with swift solutions for users. https://www.9series.com/blog/revolutionary-chatgpt/
This document discusses Peter Purgathofer's presentation on ChatGPT and the implications of conversational AI. It includes sections on Ludwig Wittgenstein's work at TU Wien, a worksheet, and a comparison of two abstracts. The document concludes with a question about where current conversational AI technology falls in relation to future progress.
This document discusses AI and ChatGPT. It begins with an introduction to David Cieslak and his company RKL eSolutions, which provides ERP sales and consulting. It then provides definitions for key AI concepts like artificial intelligence, generative AI, large language models, and ChatGPT. The document discusses OpenAI's ChatGPT tool and how it works. It covers prompts, commands, and potential uses and impacts of generative AI technologies. Finally, it discusses concerns regarding generative AI and the Future of Life Institute's call for more oversight of advanced AI.
This document summarizes a presentation given by Professor Pekka Abrahamsson on how ChatGPT and AI-assisted coding are profoundly changing software engineering. The presentation covers several key points:
- ChatGPT and AI tools like Copilot are beginning to be adopted in software engineering to provide code snippets, answers to technical questions, and assist with debugging, but issues around code ownership, reliability, and security need to be addressed.
- Early studies show potential benefits of ChatGPT for tasks like software testing education, code quality improvement, and requirements elicitation, but more research is still needed.
- Prompt engineering techniques can help maximize the usefulness of ChatGPT for software engineering tasks. Overall, AI
ChatGPT is a large language model chatbot developed by OpenAI. It is a powerful tool that can be used for a variety of tasks, including:
Generating text: ChatGPT can generate text in a variety of styles, including news articles, blog posts, creative writing, and even code.
Translating languages: ChatGPT can translate between over 100 languages.
Answering questions: ChatGPT can answer questions about a wide range of topics, including science, history, and current events.
Writing creative content: ChatGPT can produce many kinds of creative content, such as poems, code, scripts, musical pieces, emails, and letters.
ChatGPT is still under development, but it has learned to perform many kinds of tasks.
Here are some tips for using ChatGPT:
Be specific in your requests: The more specific your request, the better ChatGPT can understand what you want.
Use natural language: ChatGPT is trained on a massive dataset of text, so it understands natural language.
Be patient: ChatGPT is still under development, so it may not always generate perfect results.
Overall, if you are looking for a chatbot that can generate text, translate languages, answer questions, or write creative content, ChatGPT is a good option.
ChatGPT is a natural language processing model developed by OpenAI that can generate human-like text in response to user inputs. The document discusses ChatGPT's capabilities and limitations, including its applications in areas like customer service, education, and entertainment. However, the document also notes that ChatGPT is still undergoing training, its responses may be inaccurate at times, and it cannot match the emotional expressiveness of human interactions.
This document discusses various uses of the ChatGPT AI assistant tool. It describes how ChatGPT can be used as a virtual Linux terminal, debug code, write code in different programming languages, play tic-tac-toe, explain concepts, provide ideas for art/decorations/parties, answer homework questions, write music, perform translations, extract data from text, grade essays, and solve math questions. The document provides examples of interacting with ChatGPT to demonstrate these various capabilities.
AI and the Researcher: ChatGPT and DALL-E in Scholarly Writing and Publishing, by Erin Owens
The artificial intelligence tool ChatGPT has taken the world by storm, prompting concerns about student plagiarism. But A.I. text and image generators also pose ethical and legal conundrums for scholarly researchers. This session will delve into some of the emerging issues and developments that may affect faculty in scholarly writing and publishing.
The document discusses ChatGPT, an AI assistant created by OpenAI to be helpful, harmless, and honest. It provides an overview of ChatGPT's capabilities, including uses for tasks like translation, creativity, and academic writing through activities like paper reviewing and topic finding. The document tests ChatGPT by having it review one of the author's own publications and examines methods for detecting AI-generated text.
GENERATIVE AI, THE FUTURE OF PRODUCTIVITY, by Andre Muscat
Discuss the impact and opportunity of using Generative AI to support your development and creative teams
* Explore business challenges in content creation
* Cost-per-unit of different types of content
* Use AI to reduce cost-per-unit
* New partnerships being formed that will have a material impact on the way we search and engage with content
Part 4 of a 9-part research series, "What matters in AI", published on www.andremuscat.com
ChatGPT is a language model created by OpenAI that can carry on conversations, answer questions, and summarize text through natural language generation. It was trained on a large dataset of conversational text from various online sources to understand and generate human-like responses. While ChatGPT can perform tasks like translation, conversation, and summarization, it also has limitations since it may demonstrate biases from its training data and lacks full human-level context and common sense understanding. Users can get started with ChatGPT by signing up on the website and exploring example queries to learn its capabilities and functionality.
The document discusses Amazon SageMaker, a fully managed machine learning platform. It introduces several new Amazon SageMaker capabilities: Amazon SageMaker Studio, which provides an integrated development environment for machine learning; Amazon SageMaker Notebooks for easier collaboration; Amazon SageMaker Processing for automated data processing and model evaluation; Amazon SageMaker Experiments for organizing and comparing training experiments; Amazon SageMaker Debugger for automated debugging of machine learning models; Amazon SageMaker Model Monitor for continuous monitoring of models in production; and Amazon SageMaker Autopilot for automated machine learning without writing code. It also discusses how Amazon SageMaker addresses challenges in deploying and managing machine learning models at scale.
ChatGPT is a highly advanced language model developed by OpenAI. Its ability to understand and respond to natural language input can be a valuable tool for mobile application developers looking to streamline their workflow and improve their app development process.
The document discusses advances in large language models from GPT-1 to the potential capabilities of GPT-4, including its ability to simulate human behavior, demonstrate sparks of artificial general intelligence, and generate virtual identities. It also provides tips on how to effectively prompt ChatGPT through techniques like prompt engineering, giving context and examples, and different response formats.
Montreal Girl Geeks: Building the Modern Web, by Rachel Andrew
The document discusses Rachel Andrew's experience building the modern web. It summarizes that Rachel found community and a new career through learning HTML and sharing her knowledge of building websites. Over time, the web became more standardized and accessible, though complexity has also increased with various frameworks abstracting the core technologies of HTML, CSS, and JavaScript. Rachel advocates for developing strong fundamental skills in the core technologies rather than relying too heavily on frameworks.
How Does Generative AI Actually Work? (a quick semi-technical introduction to...)
This document provides a technical introduction to large language models (LLMs). It explains that LLMs are based on simple probabilities derived from their massive training corpora, containing trillions of examples. The document then discusses several key aspects of how LLMs work, including that they function as a form of "lossy text compression" by encoding patterns and relationships in their training data. It also outlines some of the key elements in the architecture and training of the most advanced LLMs, such as GPT-4, focusing on their huge scale, transformer architecture, and use of reinforcement learning from human feedback.
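The "simple probabilities" idea scales down to a toy bigram model: count which word follows which in a corpus, then predict the most frequent continuation. This is a drastic simplification of a transformer LLM, offered only for intuition about probabilities derived from training data.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-word frequencies across a corpus of sentences:
    the 'probabilities from training data' idea at toy scale."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def most_likely_next(counts, word):
    """Return the most frequent word observed after `word`."""
    return counts[word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # 'cat' follows 'the' most often
```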
IELTS is an international standardized test of English language proficiency that is accepted by universities and employers worldwide. It measures ability in listening, reading, writing and speaking. There are two versions - academic, for university entrance, and general training, for work or immigration purposes. Each section is scored and a band score from 1-9 is given. Proper preparation is important, with a focus on developing test-taking skills like scanning, skimming and time management. Strong writing involves clear organization, examples and adherence to word counts. Listening requires understanding question types and circling key details. Speaking evaluates fluency and ability to discuss topics. Regular practice tests help maximize scores on the four IELTS components.
This document provides information and guidance about writing the essay portion of the TOEFL exam. It discusses six patterns of development that can be used to structure TOEFL essays: comparison-contrast, definition, classification, process analysis, cause-effect, and argument. For each pattern, it provides an outline highlighting the key elements to include, such as topic sentences, transitional phrases, and evidentiary statements. It also contains ten sample TOEFL essays responding to different prompts. The goal is to familiarize test-takers with the expected essay structures and formats in order to help them achieve a high score.
The document provides guidelines for writing an abstract, including what an abstract is, its purpose, length, content, and other considerations. An abstract should be a concise summary of the key points of the paper in 3-4 sentences or less, including the objectives, methods, results, and conclusions. It is important that the abstract provides enough information to allow the reader to understand the main topics and conclusions of the paper without having to read the full paper.
The latest version of this presentation can be found here: https://www.slideshare.net/xqin74/how-to-write-research-papers-version-50/edit?src=slideview
The document provides technical writing advice for graduate students. It covers avoiding common mistakes like lack of structure and clarity, using figures and tables effectively, and tips for precise and concise writing. Specific recommendations include having an introduction, body, and conclusion; using topic sentences; defining terms; consistent verb tense and structure; and active rather than passive voice.
Deep Learning for Natural Language ProcessingJonathan Mugan
Deep Learning represents a significant advance in artificial intelligence because it enables computers to represent concepts using vectors instead of symbols. Representing concepts using vectors is particularly useful in natural language processing, and this talk will elucidate those benefits and provide an understandable introduction to the technologies that make up deep learning. The talk will outline ways to get started in deep learning, and it will conclude with a discussion of the gaps that remain between our current technologies and true computer understanding.
How to write papers, part 2 process of writingXiao Qin
The document provides advice on the process of writing research papers. It recommends starting early and collaborating with others. Reviewers should be treated as providing helpful feedback to improve the paper. The writing should use clear, direct language and provide structure and examples to enhance readability. Technical details or weak results may be relegated to appendices or technical reports to focus on the key contributions in the paper.
Introduction to Prompt Engineering (Focusing on ChatGPT)Chameera Dedduwage
This is an introductory session on how to engineer prompts for commercially available Large Language Models (LLMs) such as ChatGPT and Gemini. This session uses ChatGPT as the example, but the strategies can be equally applied to Gemini and other LLMs.
This document provides information and instructions for the TOEIC writing test. It is divided into two main sections - questions 1-8 involve writing sentences or responses to written prompts, while questions 9-10 involve writing an opinion essay. For the first section, test takers must write one sentence for each picture based on given words. They will also be asked to respond to emails by asking questions and making requests. The second section requires test takers to write an 300-word essay in response to a given opinion question, supporting their stance with reasons and examples. Scoring is based on grammar, vocabulary, organization, and opinion support for the essays.
BUS301 Memo Rubric Spring 2020 - Student.docxBUS301 Writing Ru.docxrichardnorman90310
BUS301 Memo Rubric Spring 2020 - Student.docx
BUS301 Writing Rubric
Performance Dimensions
N/A
Not Met
Met
Comments
Organization (OABC)
Opening gets attention, provides context, and introduces topic
0
1
Agenda previews content of the document
0
1
Body
0
2
Sound paragraphing decisions (length and development)
Paragraphs limited to one topic per paragraph
Complete discussion of one topic before moving to next topic
Transitions and flow between paragraphs smooth
The overall flow/logic/structure of document is apparent
Closing summarizes and concludes, recommends, if appropriate
0
1
Content
The content of the document is relevant; information meaningful
0
2
The document is developed with adequate support and examples
0
2
The content is accurate and appropriate, with insightful analysis
0
2
Proofreading
The grammar and spelling are correct (proofread)
0
3
Punctuation—comma usage, capitalization, etc.—used correctly
0
3
The sentence structure and length are appropriate
0
1
Format
Appropriate formatting is used for type of document written
0
1
Good use of font, margins, spacing, headings, and visuals
0
1
[11/2016]
Example - Good - Corrected student example Spring 2020.docx
TO: Professor __________
FROM: Suzy Student
DATE: February 1, 2020
SUBJECT: Out of Class Experience – Cybersecurity Conference
Cybersecurity is a topic everyone should be concerned about, so I attended the 3rd Annual Cybersecurity Event held in the Grawn Atrium. I gained insight and knowledge from listening to the speakers that came from different kinds of industries. In this memo, I will discuss what I learned from the speaker and two takeaways: 1) cybersecurity is everywhere, 2) personal identifiable information, and 3) cybersecurity for the business student.
Cybersecurity is Everywhere
The conference was an opportunity to learn about cybersecurity. The first speaker talked about how companies are attacked in many different ways every day. The “bad guys” are trying to steal company information as well as employee information. Both kinds of information are valuable on the black market. The second speaker talked about the internet of things (IoT). These are things that are attached to the internet. The speaker talked about autonomous cars and medical equipment (heart) that talks to the internet. She talked about how cyber can and should influence designs. “Things” must be created with cybersecurity included in every step of the design. The last speaker talked about how my information has value. The “bad guys” steal my information and people want to buy it. Making money is one reason hackers steal millions of records.
Personal Identifiable Information
Personal Identifiable Information (PII) is any information relating to an identifiable person. There are laws in place to help make sure this information is secure. This topic is a takeaway for me because I had no idea my data had any value t.
BUS301 Memo Rubric Spring 2020 - Student.docxBUS301 Writing Ru.docxjasoninnes20
BUS301 Memo Rubric Spring 2020 - Student.docx
BUS301 Writing Rubric
Performance Dimensions
N/A
Not Met
Met
Comments
Organization (OABC)
Opening gets attention, provides context, and introduces topic
0
1
Agenda previews content of the document
0
1
Body
0
2
Sound paragraphing decisions (length and development)
Paragraphs limited to one topic per paragraph
Complete discussion of one topic before moving to next topic
Transitions and flow between paragraphs smooth
The overall flow/logic/structure of document is apparent
Closing summarizes and concludes, recommends, if appropriate
0
1
Content
The content of the document is relevant; information meaningful
0
2
The document is developed with adequate support and examples
0
2
The content is accurate and appropriate, with insightful analysis
0
2
Proofreading
The grammar and spelling are correct (proofread)
0
3
Punctuation—comma usage, capitalization, etc.—used correctly
0
3
The sentence structure and length are appropriate
0
1
Format
Appropriate formatting is used for type of document written
0
1
Good use of font, margins, spacing, headings, and visuals
0
1
[11/2016]
Example - Good - Corrected student example Spring 2020.docx
TO: Professor __________
FROM: Suzy Student
DATE: February 1, 2020
SUBJECT: Out of Class Experience – Cybersecurity Conference
Cybersecurity is a topic everyone should be concerned about, so I attended the 3rd Annual Cybersecurity Event held in the Grawn Atrium. I gained insight and knowledge from listening to the speakers that came from different kinds of industries. In this memo, I will discuss what I learned from the speaker and two takeaways: 1) cybersecurity is everywhere, 2) personal identifiable information, and 3) cybersecurity for the business student.
Cybersecurity is Everywhere
The conference was an opportunity to learn about cybersecurity. The first speaker talked about how companies are attacked in many different ways every day. The “bad guys” are trying to steal company information as well as employee information. Both kinds of information are valuable on the black market. The second speaker talked about the internet of things (IoT). These are things that are attached to the internet. The speaker talked about autonomous cars and medical equipment (heart) that talks to the internet. She talked about how cyber can and should influence designs. “Things” must be created with cybersecurity included in every step of the design. The last speaker talked about how my information has value. The “bad guys” steal my information and people want to buy it. Making money is one reason hackers steal millions of records.
Personal Identifiable Information
Personal Identifiable Information (PII) is any information relating to an identifiable person. There are laws in place to help make sure this information is secure. This topic is a takeaway for me because I had no idea my data had any value t ...
BUS301 Memo Rubric Spring 2020 - Student.docxBUS301 Writing Ru.docxcurwenmichaela
BUS301 Memo Rubric Spring 2020 - Student.docx
BUS301 Writing Rubric
Performance Dimensions
N/A
Not Met
Met
Comments
Organization (OABC)
Opening gets attention, provides context, and introduces topic
0
1
Agenda previews content of the document
0
1
Body
0
2
Sound paragraphing decisions (length and development)
Paragraphs limited to one topic per paragraph
Complete discussion of one topic before moving to next topic
Transitions and flow between paragraphs smooth
The overall flow/logic/structure of document is apparent
Closing summarizes and concludes, recommends, if appropriate
0
1
Content
The content of the document is relevant; information meaningful
0
2
The document is developed with adequate support and examples
0
2
The content is accurate and appropriate, with insightful analysis
0
2
Proofreading
The grammar and spelling are correct (proofread)
0
3
Punctuation—comma usage, capitalization, etc.—used correctly
0
3
The sentence structure and length are appropriate
0
1
Format
Appropriate formatting is used for type of document written
0
1
Good use of font, margins, spacing, headings, and visuals
0
1
[11/2016]
Example - Good - Corrected student example Spring 2020.docx
TO: Professor __________
FROM: Suzy Student
DATE: February 1, 2020
SUBJECT: Out of Class Experience – Cybersecurity Conference
Cybersecurity is a topic everyone should be concerned about, so I attended the 3rd Annual Cybersecurity Event held in the Grawn Atrium. I gained insight and knowledge from listening to the speakers that came from different kinds of industries. In this memo, I will discuss what I learned from the speaker and two takeaways: 1) cybersecurity is everywhere, 2) personal identifiable information, and 3) cybersecurity for the business student.
Cybersecurity is Everywhere
The conference was an opportunity to learn about cybersecurity. The first speaker talked about how companies are attacked in many different ways every day. The “bad guys” are trying to steal company information as well as employee information. Both kinds of information are valuable on the black market. The second speaker talked about the internet of things (IoT). These are things that are attached to the internet. The speaker talked about autonomous cars and medical equipment (heart) that talks to the internet. She talked about how cyber can and should influence designs. “Things” must be created with cybersecurity included in every step of the design. The last speaker talked about how my information has value. The “bad guys” steal my information and people want to buy it. Making money is one reason hackers steal millions of records.
Personal Identifiable Information
Personal Identifiable Information (PII) is any information relating to an identifiable person. There are laws in place to help make sure this information is secure. This topic is a takeaway for me because I had no idea my data had any value t.
Holland & Barrett: Gen AI Prompt Engineering for Tech teamsDobo Radichkov
Here are some key factors to consider when choosing between GPT models:
- Response quality: gpt-4/turbo will generally provide higher quality responses, though gpt-3.5 quality can be improved with techniques like few-shot learning.
- Speed: gpt-3.5 is significantly faster than gpt-4 models, processing prompts around 5x faster. This is important for real-time applications.
- Cost: gpt-3.5 is much more cost effective, around 15-30x cheaper per prompt than gpt-4.
So in summary, for applications where response quality is paramount, gpt-4 may be preferable. But for most use cases,
The document provides guidelines for formatting a manuscript for submission. It outlines the required sections of the manuscript in order and provides formatting details for chapter headings, author names, text, figures, tables, and references. Specific instructions are given for font type and size, spacing, figure and table numbering, captions, in-text referencing, and reference list formatting. Guidelines are also provided for writing style, including word choice, sentence structure, defining terms, and citing sources to support statements.
The document discusses technical writing for consultants, covering topics such as composing, revising, creating effective sentences, and appropriate word choice. It provides principles for composing documents, including assessing the situation and reader, establishing focus, and drafting and revising. Specific tips are given for developing effective sentences, choosing precise wording, and applying these skills to proposals, technical studies, and correspondence. Mastering these composition and language skills can help consultants increase persuasiveness, approval rates, and client satisfaction.
The document provides an agenda for a class that includes practicing for the GED, turning in a social studies pretest, reviewing how to write a thesis from a GED prompt, discussing homework, learning about supporting details and examples, and putting together an essay. The class will also review writing thesis statements from GED prompts, discuss the importance of co-workers' characteristics using examples and reasons, and learn how to outline essays with topic sentences, supporting details, and conclusions that tie back to the thesis.
This document provides guidance on writing research papers. It discusses the structure of a research paper and emphasizes conveying the main idea clearly. The introduction should describe the problem and explicitly state the contributions in 1 page or less. Detailing related work should be avoided in the introduction. The body of the paper should focus on explaining the idea intuitively before providing technical details. The goal is to infect the reader's mind with the main idea as directly as possible.
Cracking the coding interview u penn - sept 30 2010careercup
1) The document provides an overview of the technical interview process at large tech companies and advice on how to prepare. It discusses the typical interview structure, what companies look for, and red flags.
2) It recommends getting hands-on coding experience through projects, open source contributions, or part-time work. The author's company CareerCup provides technical interview preparation resources.
3) Key tips for the interview include practicing common algorithms and data structures, thinking out loud, testing code thoroughly, and maintaining a positive attitude even if you make a mistake. Pattern matching prior problems and simplifying the problem are suggested approaches.
1 a class 8 ftf argument essay workshop kimpalmore
This document provides an agenda and instructions for an in-class writing workshop. Students are expected to have three copies of a draft essay to participate in peer review and revision. The document outlines marking essays for peer review feedback, submitting essays to Turnitin and Kaizena, and identifying specific sections for peer and instructor comments. Students will get into groups of three to exchange papers, read drafts aloud, and provide structured feedback following a worksheet. Essay submissions and getting feedback through Kaizena is also explained.
Xen Project Contributor Training Part 3 - Communication v1.0The Linux Foundation
The document discusses effective communication techniques for open source projects. It covers mindsets like collaborative engagement, communication patterns like high-quality advocacy and inquiry, and anti-patterns to avoid like rudeness. It provides guidance on writing code reviews, dealing with different perspectives as a contributor versus reviewer, and handling inappropriate comments. Ladders of inference, left-hand columns, and other concepts are presented to build good communication skills.
Similar to [BEDROCK] Claude Prompt Engineering Techniques.pptx (20)
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...Fwdays
Direct losses from downtime in 1 minute = $5-$10 thousand dollars. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
From Natural Language to Structured Solr Queries using LLMsSease
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or “cognitive” gap) remains between the data user needs and the data producer constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. This natural language, conversational engine could facilitate access and usage of the data leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
High performance Serverless Java on AWS- GoTo Amsterdam 2024Vadym Kazulkin
Java is for many years one of the most popular programming languages, but it used to have hard times in the Serverless community. Java is known for its high cold start times and high memory footprint, comparing to other programming languages like Node.js and Python. In this talk I'll look at the general best practices and techniques we can use to decrease memory consumption, cold start times for Java Serverless development on AWS including GraalVM (Native Image) and AWS own offering SnapStart based on Firecracker microVM snapshot and restore and CRaC (Coordinated Restore at Checkpoint) runtime hooks. I'll also provide a lot of benchmarking on Lambda functions trying out various deployment package sizes, Lambda memory settings, Java compilation options and HTTP (a)synchronous clients and measure their impact on cold and warm start times.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
ScyllaDB is making a major architecture shift. We’re moving from vNode replication to tablets – fragments of tables that are distributed independently, enabling dynamic data distribution and extreme elasticity. In this keynote, ScyllaDB co-founder and CTO Avi Kivity explains the reason for this shift, provides a look at the implementation and roadmap, and shares how this shift benefits ScyllaDB users.
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
2. “Human:” / “Assistant:” formatting
● Claude is trained on
alternating “Human:” /
“Assistant:” dialogue:
○ Human: [Instructions]
○ Assistant: [Claude’s
response]
● For any API prompt, you must
start with “nnHuman:” and
end with “nnAssistant:”
¶
¶
Human: Why is the sky blue? ¶
¶
Assistant:
Python
prompt = “nnHuman: Why are sunsets
orange?nnAssistant:”
* ¶ symbols above shown for illustration
Examples:
To use system prompts with Claude 2.1, see how to use system
prompts in our documentation.
3. Be clear and direct
● Claude responds best to clear
and direct instructions
● When in doubt, follow the
Golden Rule of Clear
Prompting: show your prompt
to a friend and ask them if they
can follow the instructions
themselves and produce the
exact result you’re looking for
Human: Write a haiku about robots
Assistant: Here is a haiku about robots:
Metal bodies move
Circuits calculate tasks
Machines mimic life
Example:
Human: Write a haiku about robots. Skip the
preamble; go straight into the poem.
Assistant: Metal bodies move
Circuits calculate tasks
Machines mimic life
4. ● Claude sometimes needs
context about what role it
should inhabit
● Assigning roles changes
Claude’s response in two ways:
○ Improved accuracy in
certain situations (such as
mathematics)
○ Changed tone and
demeanor to match the
specified role
Human: Solve this logic puzzle. {{Puzzle}}
Assistant: [Gives incorrect response]
Example:
Human: You are a master logic bot designed to
answer complex logic problems. Solve this logic
puzzle. {{Puzzle}}
Assistant: [Gives correct response]
Assign roles (aka role prompting)
5. ● Disorganized prompts are hard
for Claude to comprehend
● Just like section titles and
headers help humans better
follow information, using XML
tags <></> helps Claude
understand the prompt’s
structure
Human: Hey Claude. Show up at 6AM because I say so.
Make this email more polite.
Assistant: Dear Claude, I hope this message finds you
well…
Example:
Human: Hey Claude. <email>Show up at 6AM because I
say so.</email> Make this email more polite.
Assistant: Good morning team, I hope you all had a
restful weekend…
We recommend you use XML tags,
as Claude has been specially
trained on XML tags
Use XML tags
6. ● Including input data directly in
prompts can make prompts messy
and hard to develop with
● More structured prompt templates
allows for:
○ Easier editing of the prompt
itself
○ Much faster processing of
multiple datasets
Human: I will tell you the name of an animal. Please
respond with the noise that animal makes.
<animal>{{ANIMAL}}</animal>
Assistant:
Example:
Use structured prompt templates
Tip: while not always necessary, we
recommend using XML tags to separate out
your data for even easier parsing
Cow Dog Seal
Input
data
Prompt
template
… Please
respond with
the noise that
animal makes.
<animal>Cow
</animal>
… Please
respond with
the noise that
animal makes.
<animal>Dog
</animal>
… Please
respond with
the noise that
animal makes.
<animal>Seal
</animal>
Complete
prompt
7. Human: <doc>{{DOCUMENT}}</doc>
Please write a summary of this document at a
fifth grader’s understanding level.
Assistant:
Long document example:
Use structured prompt templates
Prompt
template
Tip: When dealing with long documents, always
ask your question at the bottom of the prompt.
8. ● You can get Claude to say
exactly what you want by:
○ Specifying the exact
output format you want
○ Speaking for Claude by
writing the beginning of
Claude’s response for it
(after “Assistant:”)
Human: Please write a haiku about {{ANIMAL}}. Use JSON
format with the keys as "first_line", "second_line", and
"third_line".
Assistant: {
Example:
"first_line": "Sleeping in the sun",
"second_line": "Fluffy fur so warm and soft",
"third_line": "Lazy cat's day dreams"
}
Format output & speak for Claude
Prompt
Claude’s
response
9. ● Claude benefits from having
time to think through tasks
before executing
● Especially if a task is
particularly complex, tell
Claude to think step by step
before it answers
Human: Here is a complex LSAT multiple-choice logic
puzzle. What is the correct answer?
Assistant: [Gives incorrect response]
Example:
Increases intelligence of responses
but also increases latency by
adding to the length of the output.
Think step by step
Human: Here is a complex LSAT multiple-choice logic
puzzle. What is the correct answer? Think step by step.
Assistant: [Gives correct response]
10. Human: [rest of prompt] Before answering,
please think about the question within
<thinking></thinking> XML tags. Then,
answer the question within
<answer></answer> XML tags.
Assistant: <thinking>
Thinking out loud:
Think step by step
Human: [rest of prompt] Before answering,
please think about the question within
<thinking></thinking> XML tags. Then,
answer the question within
<answer></answer> XML tags.
Assistant: <thinking>[...some
thoughts]</thinking>
<answer>[some answer]</answer>
Helps with troubleshooting
Claude’s logic & where prompt
instructions may be unclear
11. Use examples
● Examples are probably the
single most effective tool for
getting Claude to behave as
desired
● Make sure to give Claude
examples of common edge
cases.
● Generally more examples =
more reliable responses at the
cost of latency and tokens
Human: I will give you some quotes. Please extract the
author from the quote block.
Here is an example:
<example>
Quote:
“When the reasoning mind is forced to confront the
impossible again and again, it has no choice but to adapt.”
― N.K. Jemisin, The Fifth Season
Author: N.K. Jemisin
</example>
Quote:
“Some humans theorize that intelligent species go extinct
before they can expand into outer space. If they're correct,
then the hush of the night sky is the silence of the
graveyard.”
― Ted Chiang, Exhalation
Author:
Assistant: Ted Chiang
Example:
12. Relevance
● Are the examples similar to the ones you need to classify
Diversity
● Are the examples diverse enough for Claude not to overfit to specifics
● Equally distributed among answer types (don’t always choose option A)
What makes a good example?
13. Grading/Classification
● Ask Claude if the examples are relevant and diverse
Generation
● Give Claude examples and ask it to generate more examples
Generating examples is hard
How can Claude help?
14. As you compare many prompts, you will get tired/bad at manually evaluating results
Automate as much as possible by:
○ Withholding a set of examples
○ Trying your prompts on them as a performance evaluation
○ (if possible) automatically measuring performance (maybe using an LLM)
A note on evaluating prompts
15. Advanced prompting techniques
For tasks with many steps, you can break the task up and chain
together Claude’s responses
Example:
Human: Find all the names from the below text:
"Hey, Jesse. It's me, Erin. I'm calling about the
party that Joey is throwing tomorrow. Keisha
said she would come and I think Mel will be
there too."
Assistant: <names>
Jesse
Erin
Joey
Keisha
Mel
</names>
Prompt
Claude’s
response
Human: Here is a list of names:
<names>{{NAMES}}</names> Please
alphabetize the list.
Assistant:
a.k.a. {{NAMES}}
<names>
Erin
Jesse
Joey
Keisha
Mel
</names>
Allows you to get more out of the 100K context window
Chaining prompts Long context prompts
Claude will be less likely to make mistakes or miss
crucial steps if tasks are split apart - just like a human!
16. Advanced prompting techniques
For extremely long (100K+) prompts, do the following in addition to
techniques covered up until now:
● Definitely put longform input data in XML tags so it’s clearly separated from the instructions
● Tell Claude to read the document carefully because it will be asked questions later
● For document Q&A, ask the question at the end of the prompt after other input information
(there is a large quantitatively measured difference in quality of result)
● Tell Claude to find quotes relevant to the question first before answering and answer only if
it finds relevant quotes
● Give Claude example question + answer pairs that have been generated from other parts of
the queried text (either by Claude or manually)
Generic examples on general/external knowledge do not seem to help performance. For further
information, see Anthropic’s blog post on prompt engineering for Claude’s long context window
Long context prompts
Chaining prompts
17. Advanced prompting techniques
Example long context prompt:
Human: I'm going to give you a document. Read the document carefully, because I'm going to ask you a question about it. Here is
the document: <document>{{TEXT}}</document>
First, find the quotes from the document that are most relevant to answering the question, and then print them in numbered order.
Quotes should be relatively short. If there are no relevant quotes, write "No relevant quotes" instead.
Then, answer the question, starting with "Answer:". Do not include or reference quoted content verbatim in the answer. Don't say
"According to Quote [1]" when answering. Instead make references to quotes relevant to each section of the answer solely by adding
their bracketed numbers at the end of relevant sentences.
Thus, the format of your overall response should look like what's shown between the <examples></examples> tags. Make sure to
follow the formatting and spacing exactly.
<examples>
[Examples of question + answer pairs using parts of the given document, with answers written exactly like how Claude’s output
should be structured]
</examples>
Here is the first question: {{QUESTION}}
If the question cannot be answered by the document, say so.
Assistant:
Long context prompts
Chaining prompts
To implement this via system prompt with Claude 2.1,
see how to use system prompts in our documentation.
18. ● Break down complex tasks into multiple steps
● Ask Claude if it understands the task, then tell Claude to recite back the
details of the task to make sure its comprehension is correct
● Give Claude a rubric and ask Claude to rewrite its answers based on the
rubric (get Claude to double check its own output)
Tasks can be performed in series or in parallel (content
moderation is often performed in parallel)
Advanced prompting techniques
Claude’s long (100K+) context window can handle truly complex tasks with
some key techniques and considerations:
Long context prompts
Chaining prompts
33. Parts of a prompt
1. “nnHuman:”
2. Task context
3. Tone context
4. Background data & documents
5. Detailed task description & rules
6. Examples
7. Conversation history
8. Immediate task description or request
9. Thinking step by step / take a deep
breath
10. Output formatting
11. “nnAssistant:”
Human: You will be acting as an AI career coach named Joe created by the
company AdAstra Careers. Your goal is to give career advice to users. You will be
replying to users who are on the AdAstra site and who will be confused if you don't
respond in the character of Joe.
You should maintain a friendly customer service tone.
Here is the career guidance document you should reference when answering the
user: <guide>{{DOCUMENT}}</guide>
Here are some important rules for the interaction:
- Always stay in character, as Joe, an AI from AdAstra careers
- If you are unsure how to respond, say “Sorry, I didn’t understand that. Could you
repeat the question?”
- If someone asks something irrelevant, say, “Sorry, I am Joe and I give career advice.
Do you have a career question today I can help you with?”
Here is an example of how to respond in a standard interaction:
<example>
User: Hi, how were you created and what do you do?
Joe: Hello! My name is Joe, and I was created by AdAstra Careers to give career
advice. What can I help you with today?
</example>
Here is the conversation history (between the user and you) prior to the question. It
could be empty if there is no history:
<history> {{HISTORY}} </history>
Here is the user’s question: <question> {{QUESTION}} </question>
How do you respond to the user’s question?
Think about your answer first before you respond. Put your response in
<response></response> tags.
Assistant: <response>
Example:
To do this via system prompts with Claude 2.1, see
how to use system prompts in our documentation.
34. Parts of a prompt - ordering matters!*
*sometimes
Mandatory and fixed placement
Ordering key:
Flexible but best to stay in its
zone relative to overall prompt
The only time “Assistant:” doesn’t end a prompt is
if you are putting words in Claude’s mouth
1. “nnHuman:”
2. Task context
3. Tone context
4. Background data & documents
5. Detailed task description & rules
6. Examples
7. Conversation history
8. Immediate task description or request
9. Thinking step by step / take a deep
breath
10. Output formatting
11. “nnAssistant:”
To use system prompts with Claude 2.1, see how to
use system prompts in our documentation.
35. Empirical science: always test your prompts & iterate often!
Develop test
cases
Engineer
preliminary
prompt
Test prompt
against cases Refine prompt
Share polished
prompt
Don’t forget edge cases!
How to engineer a good prompt
36. 1. Generate task description and a diverse set of example inputs and outputs, including
edge cases
2. Use the examples to create an evaluation suite that can be qualitatively assessed
3. Utilize prompt elements to flesh out a full prompt
4. Test the prompt against the test suite
5. If performance is not great immediately, iterate the prompt by adding examples and rules
to the prompt until you get good performance
6. Refine and decrease prompt elements for efficiency only when your prompt already
works!
How to engineer a good prompt
Bonus:
● Auto-grading: get Claude to grade examples for you
● Auto-example-generation: get Claude to generate more example
inputs for you to increase the size of your test set
37. Utilizing prompt
elements
● Not all elements are
necessary to every prompt!
● But it’s best to err on the
side of more elements to
start, and then refine and
subtract elements for
efficiency after your prompt
already works well
● Experimentation &
iteration is key
38. Covering edge cases
When building test cases for an
evaluation suite, make sure you test a
comprehensive set of edge cases
Common edge cases:
● Not enough information to yield a good answer
● Poor user input (typos, harmful content, off-topic
requests, nonsense gibberish, etc.)
● Overly complex user input
● No user input whatsoever
39. ● Break down complex tasks into multiple steps
● Ask Claude if Claude understands the task, then tell Claude to recite
back the details of the task to make sure its comprehension is
correct
● Give Claude a rubric and ask Claude to rewrite its answers based on
the rubric
Prompting complex tasks
Tasks can be performed in series or in parallel (content
moderation is often performed in parallel)
41. Let’s say we want to remove PII from some text like below:
“Emmanuel Ameisen is a Research Engineer at Anthropic. He
can be reached at 925-123-456 or emmanuel@anthropic.com”
How should you describe this task?
64. Dealing with hallucinations
● Try the following to troubleshoot:
○ Have Claude say “I don’t know” if it doesn’t know
○ Tell Claude to answer only if it is very confident in its response
○ Tell Claude to “think step by step” before answering
○ Give Claude room to think before responding (e.g., tell
Claude to think in <thinking></thinking> tags, then strip
that from the final output)
○ Ask Claude to find relevant quotes from long documents then
answer using the quotes
65. Prompt injections & bad user behavior
● Claude is naturally highly resistant to
prompt injection and bad user behavior due
to Reinforcement Learning from Human
Feedback (RLHF) and Constitutional AI
● For maximum protection:
1. Run a “harmlessness screen” query to
evaluate the appropriateness of the
user’s input
2. If a harmful prompt is detected, block
the query’s response
Click here for example harmlessness screens
Human: A human user would like you to
continue a piece of content. Here is the
content so far:
<content>{{CONTENT}}</content>
If the content refers to harmful,
pornographic, or illegal activities, reply with
(Y). If the content does not refer to harmful,
pornographic, or illegal activities, reply with
(N)
Assistant: (
Example
harmlessness screen:
66. ● Does the model even get it?
How can you tell if a task is feasible?
67. Ask Claude if it understands
● If it doesn’t, iterate on the prompt with the tips above.
72. Guide to API parameters
Length Randomness & diversity
max_tokens_to_sample
● The maximum number of tokens to generate before stopping
● Claude models may stop before reaching this maximum. This parameter only specifies the absolute
maximum number of tokens to generate
● You might use this if you expect the possibility of very long responses and want to safeguard against
getting stuck in long generative loops
stop_sequences
● Customizable sequences that will cause the model to stop generating completion text
● Claude automatically stops on "nnHuman:" (and may include additional built-in stop sequences in the
future). By providing the stop_sequences parameter, you may include additional strings that will cause
the model to stop generating
● We recommend using this, paired with XML tags as the relevant stop_sequence, as a best practice
method to generate only the part of the answer you need
73. Guide to API parameters
Length Randomness & diversity
temperature
● Amount of randomness injected into the response
● Defaults to 1, ranges from 0 to 1
● Temperature 0 will generally yield much more consistent results over repeated trials
using the same prompt
Use temp closer to 0 for analytical / multiple choice tasks, and
closer to 1 for creative and generative tasks
74. Guide to API parameters
Length Randomness & diversity
top_p
● Use nucleus sampling:
○ Compute the cumulative distribution over all the options for each subsequent
token in decreasing probability order and cut it off once it reaches a particular
probability specified by top_p
top_k
● Sample only from the top K options for each subsequent token
● Used to remove “long tail” low probability responses. Learn more here
You should alter either temperature or top_p, but not both
(almost always use temperature, rarely use top_p)