TRADITIONAL SOFTWARE responds predictably to instructions. “Generative” artificial-intelligence (AI) models, such as the one behind ChatGPT, are different: they respond to requests written in everyday language and can produce surprising results. At first glance, writing effective prompts for an AI seems much easier than, say, mastering a programming language. But as the capabilities of AI models have grown, getting the most out of the algorithms inside these black boxes has become harder. “Prompt engineering”, as the skill is called, has been compared to guiding a dance partner or prodding a beast to see how it responds. What does it involve?
To start with, a good prompt should include clear instructions: list the potential downsides of a given policy proposal, for example, or write a friendly marketing email. Ideally, the prompt should also encourage the model to reason in stages: telling it to “think step by step” often improves performance markedly. Breaking instructions into a logical sequence of separate tasks works similarly. To elicit a clear explanation of a scientific concept, for instance, you can ask the AI to explain it and then to define the key terms used in that explanation. This “chain of thought” technique can also reveal something about what is going on inside the model.
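As a rough illustration (not drawn from the article), a “chain of thought” prompt of this kind might be sent to a chat model like so; the sketch assumes the OpenAI Python client, and the model name and example question are placeholders, not anything the article specifies.

```python
# Minimal sketch of a "chain of thought" style prompt, assuming the OpenAI
# Python client (pip install openai). Model name and task are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Explain what photosynthesis is.\n"
    "Think step by step: first give the explanation in plain language,\n"
    "then define each technical term you used in that explanation."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```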
Users also need to keep models anchored to the facts. Since big models are trained on what one engineer calls “everything from everywhere”, it helps to include authoritative text in the prompt, to direct the model to prioritise specific sources, or at least to tell it to list the sources it drew on. Many models also offer a “temperature” setting; raising it increases the randomness of the output. That can suit creative tasks such as writing fiction, but it also tends to raise the incidence of factual errors.
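As a sketch of how those two levers might be pulled together, the snippet below lowers the temperature and instructs the model to answer only from a supplied passage and to list the parts it relied on. The client library, model name, temperature value and source text are all assumptions for illustration.

```python
# Sketch: low temperature plus grounding in supplied source text.
# The source passage, model name and temperature value are hypothetical.
from openai import OpenAI

client = OpenAI()

source_text = "<paste an authoritative passage here>"

response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumed model name
    temperature=0.2,       # low temperature: less random, fewer factual slips
    messages=[
        {
            "role": "system",
            "content": "Answer using only the source text below, and list "
                       "the passages you relied on.\n\nSOURCE:\n" + source_text,
        },
        {"role": "user", "content": "Summarise the key claims in the source."},
    ],
)

print(response.choices[0].message.content)
```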
Asking the AI to role-play can also help. To generate advertising copy, Crispy Content, a Berlin-based marketing agency, has the model write, and then defend, sample copy from the point of view of a sales director, a head of marketing and a “creative”. The best-defended idea is then refined by staff. This “persona” approach produces responses that seem more human, says Bilyal Mestanov of Promptly Engineering, an agency in Bulgaria.
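The “persona” idea can be pictured as running the same draft past several assumed points of view. The role names below echo those mentioned in the article; everything else, from the client library to the prompt wording, is an illustrative assumption rather than the agency’s actual workflow.

```python
# Sketch of the "persona" approach: critique one draft from several viewpoints.
# Model name and prompt wording are assumptions for illustration.
from openai import OpenAI

client = OpenAI()

draft = "<draft advertising copy goes here>"
personas = ["a sales director", "a head of marketing", "a creative"]

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": f"You are {persona} reviewing ad copy."},
            {
                "role": "user",
                "content": f"Critique this draft and suggest one improvement:\n\n{draft}",
            },
        ],
    )
    print(f"--- {persona} ---")
    print(response.choices[0].message.content)
```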
Asking models to behave like humans raises the question of AI etiquette. Some reckon that a prompt including the word “please” can steer the model towards politer source material and therefore produce a response written in a similarly polite tone. A “thank you” after a helpful answer may suggest to the model that it is on the right track. But falling over yourself to thank a model too profusely can clutter prompts and divert some of its processing power the wrong way, says Josh Hewett of Discoverable, a British marketing agency.
Good prompts are valuable. Crispy Content develops templates that instruct models to write 1,000-word articles for its clients. Users enter keywords (“red wines from Andalusia, Spain”) and the desired tone. Developing one of these templates takes about €25,000 ($27,000)-worth of man-hours, says Gerrit Grunert, the company’s managing director, and the results must still be reviewed by an editor. But where Crispy Content used to charge around €400 for each human-written article, those generated from its templates cost around €4 apiece.
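Such a template can be imagined as a reusable prompt with slots for the keywords and tone the user supplies. The skeleton below is purely illustrative; it is not Crispy Content’s actual template.

```python
# Purely illustrative skeleton of a prompt template with keyword and tone
# slots; not Crispy Content's actual template.
ARTICLE_TEMPLATE = (
    "Write a 1,000-word article about {keywords}.\n"
    "Tone: {tone}.\n"
    "Structure: introduction, three themed sections, conclusion.\n"
    "Define any technical terms the first time they appear."
)

prompt = ARTICLE_TEMPLATE.format(
    keywords="red wines from Andalusia, Spain",
    tone="warm and knowledgeable",
)
print(prompt)
```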
Coaching agencies and online courses designed to teach the skill are booming. Prompt-engineering jobs started appearing in late 2022 and are becoming more common. Popular candidates include graduates with degrees in languages or the humanities. Advances in artificial intelligence could eventually make such jobs obsolete as models learn to anticipate users’ needs better. But for now, it looks as though AI whisperers will enjoy the upper hand.
© 2023, The Economist Newspaper Limited. All rights reserved.
From The Economist, published under license. Original content can be found at www.economist.com