Examples of Prompts
The previous section introduced basic examples of prompts for large language models.
This section provides more examples of how prompts can be used to accomplish different tasks, introducing key concepts along the way. Often, the best way to learn a concept is through examples. The examples below illustrate how well-crafted prompts can be used to perform different types of tasks.
Topics:
How do you change the model's writing style?
Use cases: copywriting, where the style needs to match the subject matter; building your own chatbot, where you need to define a custom personality
Tone: formal, friendly, serious, humorous, motivational, critical, calm
Style: narrative storytelling, news reporting, persuasive argument, casual conversation, formal academic, humorous irony, personal essay
Creativity: Temperature=X (the higher the value, the more creative the output)
Prompt:
Respond as a content creator with outstanding Chinese writing skills. Write a completely original, human-written article of about 1,500 words in Chinese, related to the topic described in the prompt. Make sure the article is authentic and avoid directly quoting other sources. While writing, emphasize complexity and spontaneity, reaching the best possible level without compromising relevance or context. Construct engaging paragraphs as if they were written by a human (keep it simple, engage the audience, use the active voice, stay concise, ask rhetorical questions, and weave in comparisons and imagery). Do not use "firstly" and "secondly"; try other transition words or phrases instead. Do not repeat my request. Do not apologize or refer to yourself. Put the title in bold. Now, write an article on the following topic: How is generative AI changing everyday life?
Please use a humorous tone and a casual conversational writing style, writing in Chinese.
Output:
When a Robot Becomes the Family's New Favorite: How Generative AI Is Upending Our Daily Lives
In this era of rapid technological progress, if you still haven't heard of generative AI, you might be getting left behind. Okay, that may be a slight exaggeration, but you have to admit that generative AI really has quietly changed our lives, like an uninvited guest who barges through the door and, before you know it, becomes the family's new favorite.
Imagine this: you get up in the morning, and before you've even rubbed the sleep from your eyes, your personal AI assistant has already prepared today's schedule and weather forecast, and even tailored breakfast suggestions to your tastes and health data. Sounds a bit like a scene from a sci-fi movie, doesn't it? Yet with the help of generative AI, all of this has quietly become reality.
Generative AI, as the name suggests, excels at generating content, from text to images to music and video; there is almost nothing it cannot do. And we are enjoying all the convenience and fun it brings.
At work, it is your capable assistant
Remember those years when we agonized over writing a report or preparing a presentation? Now, with generative AI, all you need to do is provide a rough outline or topic, and in the time it takes to finish a cup of coffee, a well-structured article with a clear point of view appears in front of you. What's more, it can adjust the style and tone to your requirements. You can't help but marvel: this is a lifesaver for the lazy!
In daily life, it is your thoughtful companion
If you think generative AI only shines at work, you are sorely mistaken. It can play an important role in everyday life as well. For example, you can have it plan a different menu for you every day based on your preferences and the local ingredients. Or, when you feel lonely, it can turn into a chat partner, keep you company, make you laugh, and even offer counseling.
In your studies, it is your private tutor
For students, generative AI is nothing short of a godsend. A problem you don't understand? No worries, the AI will explain it. Need study notes? The AI will organize them for you. Want to learn something new? The AI will build a study plan for you. It is like an all-knowing, all-capable super tutor that makes learning more efficient and more fun.
But is everything really that rosy?
Just as we are immersed in all the convenience and fun that generative AI brings, some people have begun to worry: will this kind of technology make humans increasingly dependent on machines, and even cause us to lose some basic life skills and creativity?
Indeed, every technological advance is a double-edged sword. Generative AI makes our lives more convenient, but it also reminds us that while enjoying the fruits of technology, we need to stay mindful about using it wisely, and not lose our love of life or our pursuit of knowledge.
In short, the arrival of generative AI has undoubtedly brought huge changes to our daily lives, with boundless possibilities as well as real challenges. Either way, it has already become an indispensable part of our lives, just like that uninvited guest who caught us off guard at first but who, little by little, has made life all the more wonderful.
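If you are calling a model through an API rather than a chat UI, the tone and style instructions go into the prompt itself, while creativity is controlled by the temperature parameter mentioned above. Below is a minimal sketch using the OpenAI Python SDK; the model name and temperature value are placeholders, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use whichever model you have access to
    temperature=0.9,      # higher values tend to produce more creative output
    messages=[
        {
            "role": "system",
            "content": "You are a content creator. Use a humorous tone and a casual, "
                       "conversational writing style.",
        },
        {
            "role": "user",
            "content": "Write a short article: How is generative AI changing everyday life?",
        },
    ],
)

print(response.choices[0].message.content)
```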
How do you extract information quickly?
Use cases: extracting key data from large amounts of unstructured text such as news articles, business reports, and financial statements
Prompt:
Company A Business Report Overview: Demonstrating Outstanding Growth
In the fintech sector, competition among third-party payment companies is growing ever fiercer. As an active player in this space, Company A demonstrated remarkable growth over the past fiscal year in its recent business report presentation. In the report, Company A's CEO shared a series of encouraging financial figures and business milestones, underscoring the company's healthy growth momentum and industry leadership.
As an innovative third-party payment platform, Company A has been committed since its founding to providing secure, fast, and convenient payment solutions. The company uses advanced technologies, including mobile payments, online transaction processing, and secure payment technology, to improve and simplify the user payment experience. As a fintech leader, Company A has continued to expand its market reach, attracting a large number of business and individual users. In the report, the CEO noted that the past year was a fruitful one for Company A. Revenue grew 15% to $50 million. This growth was driven mainly by the successful execution of product innovation and market expansion strategies, which allowed the company to attract more new customers while keeping existing customers loyal. Even more notably, the company's profit margin reached 12%, with net profit as high as $6 million. This profit growth came not only from higher revenue but also from the company's ongoing focus on cost control and operational efficiency. By optimizing its operating processes and investing in automation, the company was able to improve service quality while containing and reducing unnecessary spending.
In terms of customer growth, Company A likewise showed strong momentum. The report shows the company's customer base grew 20%, bringing the total to 100,000 customers. This growth came from efforts to improve the user experience, develop new services, and expand into international markets. Through continuous optimization of its user interface and support for multiple payment methods, Company A has led more users to choose it as their primary payment service provider. In addition, Company A expanded its business and services globally. The company not only strengthened its position in its home market but also entered new overseas markets through strategic partnerships and internationalization efforts, which increased its international influence and brought more diversified revenue streams.
In terms of human resources, Company A's headcount grew 25% to 500 employees. New hires were concentrated in key areas such as technology development, customer service, marketing, and international business expansion. This growth reflects the company's continued investment in scaling the business and improving the quality of its products and services. On the cost side, operating expenses rose 10% to $10 million. The increase went mainly toward payment system security upgrades, stronger data protection measures, and marketing campaigns. Although operating costs rose, these investments are critical to maintaining the company's competitiveness and ensuring long-term success. Looking ahead, Company A plans to continue down the path of technological innovation and keep improving the quality of its products and services. The company will focus its investments on mobile payment technology, big data analytics, and artificial intelligence to further optimize payment flows and improve transaction security and efficiency. It also plans to strengthen its presence in the global payments market, seek more international partners, and expand into emerging markets. On the marketing side, Company A will raise its brand awareness and market share through diversified marketing strategies and brand campaigns. It will also continue to deepen relationships with existing customers and provide more personalized, customized services to improve customer satisfaction and loyalty.
Company A's business report demonstrates its strong growth and expanding market influence in the third-party payment space. Through continuous innovation, effective market expansion, and shrewd operating strategy, Company A has built a strong brand and a solid market position. Looking ahead, the company will stick to its growth strategy and keep seeking new growth areas and opportunities to achieve sustained business expansion and financial success.
Based on the article above, generate a table containing only the specific data points:
Output:
Based on the article, here is a table of Company A's specific data:
| Metric | Value |
|------------|------------|
| Revenue growth | 15% |
| Total revenue | $50 million |
| Profit margin | 12% |
| Net profit | $6 million |
| Customer base growth | 20% |
| Total customers | 100,000 |
| Headcount growth | 25% |
| Total employees | 500 |
| Operating expense growth | 10% |
| Total operating expenses | $10 million |
This table summarizes Company A's key financial and business results for the most recent fiscal year, including growth in revenue, profit, customer base, and headcount, as well as the change in operating expenses.
You can also ask the model to summarize for you, for example: Based on the article above, generate a summary of no more than 500 words.
In addition, most LLMs now offer a document-upload feature in their chat interfaces: you can upload a document, attach your instruction, and have the model read the document and carry out your task.
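If your model or client does not support file upload, a common workaround is to paste the document text directly into the prompt, as long as it fits in the context window. The sketch below assumes a hypothetical local file named report.txt containing the report text and reuses the table-extraction instruction from above.

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Hypothetical local file containing the business report text.
document = Path("report.txt").read_text(encoding="utf-8")

prompt = (
    f"{document}\n\n"
    "Based on the article above, generate a table containing only the specific data points:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0,        # low temperature for faithful extraction
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```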
Question Answering
One of the best ways to get the model to respond with specific answers is to improve the format of the prompt. As covered before, a prompt can combine instructions, context, input, and output indicators to get improved results. While these components are not required, they are good practice: the more specific you are with your instructions, the better the results you will get. Below is an example of how this looks with a more structured prompt.
Prompt:
Answer the question based on the context below. Keep the answer short and concise. Respond "Unsure about answer" if not sure about the answer.
Context: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.
Question: What was OKT3 originally sourced from?
Answer:
Output:
Mice.
Context obtained from Nature.
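One way to keep the instruction, context, question, and answer indicator separate and reusable is to assemble the prompt from a template. The snippet below is only a sketch of that structure; the variable names are illustrative, and the context is abbreviated.

```python
# Assemble a question-answering prompt from its components.
instruction = (
    "Answer the question based on the context below. Keep the answer short and concise. "
    'Respond "Unsure about answer" if not sure about the answer.'
)
context = (
    "Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. ..."
)
question = "What was OKT3 originally sourced from?"

prompt = f"{instruction}\n\nContext: {context}\n\nQuestion: {question}\n\nAnswer:"
print(prompt)
```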
Text Classification
So far, you have used simple instructions to perform a task. As a prompt engineer, you need to get better at providing better instructions. But that's not all! You will also find that for harder use cases, just providing instructions won't be enough. This is where you need to think more about the context and the different elements you can use in a prompt. Other elements you can provide are input data or examples.
Let's try to demonstrate this by providing an example of text classification.
Prompt:
Classify the text into neutral, negative or positive.
Text: I think the food was okay.
Sentiment:
Output:
Neutral
You gave the instruction to classify the text and the model responded with 'Neutral', which is correct. Nothing is wrong with this, but let's say that what you really need is for the model to give the label in the exact format you want. So instead of Neutral, you want it to return neutral. How do you achieve this? There are different ways to do this. You care about specificity here, so the more information you can provide the prompt, the better the results. You can try providing examples to specify the correct behavior. Let's try again:
Prompt:
Classify the text into neutral, negative or positive.
Text: I think the vacation is okay.
Sentiment: neutral
Text: I think the food was okay.
Sentiment:
Output:
neutral
Perfect! This time the model returned neutral, which is the specific label you were looking for. It seems that the example provided in the prompt helped the model to be specific in its output.
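Programmatically, a few-shot prompt like this is usually built from a small list of labeled examples followed by the new input. A minimal sketch (the names are illustrative; no particular library is needed to build the prompt string):

```python
# Build a few-shot classification prompt from labeled examples.
examples = [
    ("I think the vacation is okay.", "neutral"),
]
new_text = "I think the food was okay."

lines = ["Classify the text into neutral, negative or positive."]
for text, label in examples:
    lines.append(f"Text: {text}\nSentiment: {label}")
lines.append(f"Text: {new_text}\nSentiment:")

prompt = "\n".join(lines)
print(prompt)
```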
To highlight why sometimes being specific is important, check out the example below and spot the problem:
Prompt:
Classify the text into nutral, negative or positive.
Text: I think the vacation is okay.
Sentiment:
Output:
Neutral
What is the problem here? As a hint, the made-up nutral label is completely ignored by the model. Instead, the model outputs Neutral as it has some bias towards that label. But let's assume that what you really want is nutral. How would you fix this? Maybe you can try adding descriptions to the labels or add more examples to the prompt? If you are not sure, we will discuss a few ideas in the upcoming sections.
Conversation
Perhaps one of the more interesting things you can achieve with prompt engineering is instructing the LLM system on how to behave, its intent, and its identity. This is particularly useful when you are building conversational systems like customer service chatbots.
For instance, let's create a conversational system that's able to generate more technical and scientific responses to questions. Note how you are explicitly telling it how to behave through the instruction. This is sometimes referred to as role prompting.
Prompt:
The following is a conversation with an AI research assistant. The assistant tone is technical and scientific.
Human: Hello, who are you?
AI: Greeting! I am an AI research assistant. How can I help you today?
Human: Can you tell me about the creation of blackholes?
AI:
Output:
Sure! Black holes are regions of spacetime where the gravitational force is so strong that nothing, not even light, can escape from it. They are created when a very massive star dies and its core collapses in on itself, forming a singularity of infinite density. The intense gravity of the singularity pulls in all the matter and radiation around it, creating the black hole.
Our AI research assistant sounds a bit too technical, right? Okay, let's change this behavior and instruct the system to give more accessible answers.
Prompt:
The following is a conversation with an AI research assistant. The assistant answers should be easy to understand even by primary school students.
Human: Hello, who are you?
AI: Greeting! I am an AI research assistant. How can I help you today?
Human: Can you tell me about the creation of black holes?
AI:
Output:
Sure! Black holes are created when a very massive star runs out of fuel and collapses in on itself. The gravity of the star is so strong that nothing, not even light, can escape its pull. This creates a point in space-time with an intense gravitational force called a black hole.
You made some progress. You can continue improving it. If you add more examples, you might get even better results.
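With a chat API, this kind of role prompting usually goes into a system message, and the conversation so far is passed as alternating user and assistant messages. A minimal sketch, again with a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are an AI research assistant. Your answers should be easy to "
                       "understand even by primary school students.",
        },
        {"role": "user", "content": "Hello, who are you?"},
        {"role": "assistant", "content": "Greeting! I am an AI research assistant. How can I help you today?"},
        {"role": "user", "content": "Can you tell me about the creation of black holes?"},
    ],
)

print(response.choices[0].message.content)
```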
Code Generation
One application where LLMs are quite effective is code generation. Copilot is a great example of this. There are a vast number of code-generation tasks you can perform with clever prompts. Let's look at a few examples below.
First, let's try a simple program that greets the user.
Prompt:
/*
Ask the user for their name and say "Hello"
*/
Output:
let name = prompt("What is your name?");
console.log(`Hello, ${name}!`);
Notice that you didn't even need to specify the language to use.
Let's switch levels a bit. The example below shows how powerful LLMs can be with a little more effort in designing the prompts.
Prompt:
"""
Table departments, columns = [DepartmentId, DepartmentName]
Table students, columns = [DepartmentId, StudentId, StudentName]
Create a MySQL query for all students in the Computer Science Department
"""
Output:
SELECT StudentId, StudentName
FROM students
WHERE DepartmentId IN (SELECT DepartmentId FROM departments WHERE DepartmentName = 'Computer Science');
This is very impressive. In this case, you provided data about the database schema and asked it to generate a valid MySQL query.
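Generated SQL is easy to sanity-check by running it against a throwaway database. Below is a minimal sketch using Python's built-in sqlite3 module (SQLite rather than MySQL, but this particular query runs unchanged); the sample rows are made up purely for illustration.

```python
import sqlite3

# Create an in-memory database matching the schema described in the prompt.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE departments (DepartmentId INTEGER, DepartmentName TEXT);
    CREATE TABLE students (DepartmentId INTEGER, StudentId INTEGER, StudentName TEXT);
    INSERT INTO departments VALUES (1, 'Computer Science'), (2, 'History');
    INSERT INTO students VALUES (1, 101, 'Ada'), (2, 102, 'Herodotus');
    """
)

# The query generated by the model.
query = """
SELECT StudentId, StudentName
FROM students
WHERE DepartmentId IN (SELECT DepartmentId FROM departments WHERE DepartmentName = 'Computer Science');
"""
print(conn.execute(query).fetchall())  # [(101, 'Ada')]
```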
Reasoning
Perhaps one of the most difficult tasks for an LLM today is one that requires some form of reasoning. Reasoning is one of the most interesting areas due to the types of complex applications that can emerge from LLMs.
There have been some improvements in tasks involving mathematical capabilities. That said, it's important to note that current LLMs struggle to perform reasoning tasks so this requires even more advanced prompt engineering techniques. We will cover these advanced techniques in the next guide. For now, we will cover a few basic examples to show arithmetic capabilities.
Prompt:
What is 9,000 * 9,000?
Output:
81,000,000
Let's try something more difficult.
Prompt:
The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.
A:
Output:
No, the odd numbers in this group add up to an odd number: 119.
That's incorrect! Let's try to improve this by improving the prompt.
Prompt:
The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.
Solve by breaking the problem into steps. First, identify the odd numbers, add them, and indicate whether the result is odd or even.
Output:
Odd numbers: 15, 5, 13, 7, 1
Sum: 41
41 is an odd number.
Much better, right? By the way, we tried this task a couple of times and the model sometimes fails. If you provide better instructions combined with examples, it might help get more accurate results.
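When an answer like this matters, it is also worth checking the model's arithmetic with a few lines of code rather than trusting the output. For the example above:

```python
numbers = [15, 32, 5, 13, 82, 7, 1]

odd_numbers = [n for n in numbers if n % 2 == 1]
total = sum(odd_numbers)

print(odd_numbers)                             # [15, 5, 13, 7, 1]
print(total, "odd" if total % 2 else "even")   # 41 odd
```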
In the upcoming section, we will cover even more advanced prompt engineering concepts and techniques for improving performance on all these and more difficult tasks.
Notebook
If you want to practice with the prompts above using Python, we have prepared a notebook to test some of the prompts using the OpenAI models.