AI Writing at Scale: Best Practices

Author: Yash Tekriwal

💬 Prompting ≠ Prompting at scale

There are a million resources on learning prompt engineering, but almost none on prompt engineering at scale. When you’re writing one prompt to run as accurately as possible across hundreds or thousands of rows, the game is a bit different. We’ve put together five best practices to make your prompts as efficient as possible.

1️⃣ Sequence

When you’re prompting, giving steps in a logical, precise order increases clarity and reduces the rate of hallucination (made-up results). By giving the AI an order in which to execute, you’re giving it guardrails on how to “think” instead of letting it run wild.

Here’s an example of an un-sequenced prompt:

  • Write me an email first line to Name using the LinkedIn profile information below: Enrich Person from LinkedIn Profile

Here’s a prompt with some better sequencing:

  • I will give you the full LinkedIn profile for Name below. Write a first line for an email intro that provides context on why I'm reaching out, personalized to the prospect. In order of priority, mention awards, volunteering information, information from the bio, or past experiences. Enrich Person from LinkedIn Profile
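If you happen to be assembling prompts in code rather than in a prompt box, the same sequenced instruction can live in one template that gets filled in once per row. The sketch below is illustrative only: the variable names and the build_prompt helper are hypothetical, not a feature of any particular tool.

```python
# A minimal sketch of a sequenced prompt template, assuming each row gives
# you the prospect's name and their LinkedIn enrichment as plain strings.
# The names `name` and `linkedin_profile` are illustrative placeholders.

SEQUENCED_PROMPT = (
    "I will give you the full LinkedIn profile for {name} below. "
    "Write a first line for an email intro that provides context on why "
    "I'm reaching out, personalized to the prospect. In order of priority, "
    "mention awards, volunteering information, information from the bio, "
    "or past experiences.\n\n"
    "{linkedin_profile}"
)

def build_prompt(name: str, linkedin_profile: str) -> str:
    """Fill the sequenced template for a single row of data."""
    return SEQUENCED_PROMPT.format(name=name, linkedin_profile=linkedin_profile)

print(build_prompt("Jane Doe", "Awards: ...\nVolunteering: ...\nBio: ..."))
```

Keeping the steps in one ordered template means every row gets the same guardrails, which is exactly what you want when the prompt runs thousands of times.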

2️⃣ Chunk

Chunking is the most counterintuitive principle to most users. We are primed to think that more is better, but as with many things in life, less is more when it comes to prompting at scale.

Many people think that the more context and instructions you give to AI, the better it will perform.

Not necessarily.

Working with AI is a lot like training a new employee. If you give a new employee a 20-page guide on how to write cold emails on Day 1, you’re not setting them up for success. The same is true with AI.

When onboarding new employees, the goal is to be as insight-rich and compact as possible. Again, the same is true with AI.

This means removing filler words and fluff, but it goes even further.

💡 If you can communicate the exact same meaning in fewer words, always use the option with fewer words.

That goes as far as using I will provide instead of I am going to give you. Remember that LLMs aren’t true “intelligence”. They’re complex algorithms trained on billions of data points, and they use that massive dataset to predict the most likely next words from your prompt. So the less room there is for confusion, the better.
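You don’t need any special tooling to sanity-check this; even a rough word count makes the point. The snippet below is just a sketch of that comparison (word count is only a proxy for token count, which depends on the model’s tokenizer):

```python
# A rough illustration of "same meaning, fewer words": compare two phrasings
# of the same instruction by word count. Word count is only a proxy for
# tokens, but the direction of the difference is what matters.

verbose = "I am going to give you the full LinkedIn profile for the prospect below."
compact = "I will provide the prospect's full LinkedIn profile below."

for label, text in [("verbose", verbose), ("compact", compact)]:
    print(f"{label}: {len(text.split())} words -> {text}")
```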

3️⃣ Sample

Whenever you want a narrow range of results, it’s helpful to provide a narrow range of examples as inspiration.

Think about onboarding a new employee again. If you give them a set of principles but no examples of what great looks like, there’s still a lot of room for error.

But if you give a new employee both a set of principles and examples of what great looks like, they have something to reference when thinking about quality.

Once again, AI is exactly the same. Providing examples of the kind of output you want it to create narrows the range of results and tightens the guardrails around what great can look like.
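In practice, sampling usually means appending a handful of examples (few-shot samples) to the end of the prompt. Here is a minimal sketch of what that might look like; the example first lines are made up for illustration, so swap in lines you actually consider great:

```python
# A minimal sketch of few-shot sampling: append a narrow set of example
# outputs to the prompt so the model has "what great looks like" to anchor on.
# The example first lines below are invented for illustration only.

EXAMPLES = [
    "Saw that your team just picked up an award for best B2B site - congrats.",
    "Just read your post on ramping new SDRs in a week and loved the breakdown.",
]

def with_examples(base_prompt: str, examples: list[str]) -> str:
    """Append example outputs to a base prompt as a bulleted list."""
    shots = "\n".join(f"- {line}" for line in examples)
    return f"{base_prompt}\n\nHere are examples of great first lines:\n{shots}"

print(with_examples("Write a first line for an email intro to the prospect.", EXAMPLES))
```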

4️⃣ Specify

Last of the prompt-writing principles, but potentially the most important, is to specify.

When you’re onboarding a new employee, principles and a mission are enough to get going, but if you don’t also spell out the rules they shouldn’t break, you can’t blame them when they do.

You have to give people a set of “Dos” and “Do Nots” to set them up for maximum success. AI operates in exactly the same way.

When you specify guardrails like “Never use 1st person pronouns” or “Always start with ‘Just read’, ‘came across’, or ‘Saw’”, the model factors those constraints into its final output.

You should always consider specifying:

  • Objective
  • Tone
  • Length of response
  • Format of response
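Here is a sketch of how those four specifications plus the “Dos” and “Do Nots” might be bolted onto a prompt. The objective, tone, length, and format values are illustrative; the two guardrail rules are the ones quoted above:

```python
# A minimal sketch of the "specify" principle: spell out objective, tone,
# length, format, and hard guardrails, then append them to the base prompt.
# The values below are illustrative; the last two rules mirror this lesson.

GUARDRAILS = """Objective: write a one-sentence, personalized email first line.
Tone: casual but professional.
Length: at most 25 words.
Format: plain text, one sentence, no greeting or sign-off.
Never use 1st person pronouns.
Always start with "Just read", "came across", or "Saw"."""

def specify(base_prompt: str) -> str:
    """Append explicit rules so every row is generated under the same constraints."""
    return f"{base_prompt}\n\nFollow these rules:\n{GUARDRAILS}"

print(specify("Write a first line for an email intro to the prospect."))
```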

5️⃣ Iterate

The final principle is actually the one to rule them all: iterate, iterate, iterate.

Nobody nails the perfect prompt on the first try. Write your best first draft, run it on 10 rows of data, and look at what you don’t like. Consider the edge cases. Add specificity, chunks, examples, or sequencing to improve the prompt. Repeat.

Do this over and over again until you’ve got 10 rows of data returning exactly what you want. Then, run it for the next 10. Repeat.

Once you’ve got a prompt running reliably for 10-20 rows of data, the likelihood that it scales effectively to the next 1,000 is far higher.
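If you’re testing outside of a no-code tool, the loop is easy to sketch in Python. The example below assumes the OpenAI Python SDK, a hypothetical prospects.csv with name and linkedin_profile columns, and an API key in your environment; none of the names here refer to a specific product feature.

```python
# A rough sketch of the iterate loop: run the current prompt over the first
# 10 rows, eyeball the results, tweak the prompt, and rerun. Assumes the
# OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY in the env.
# The CSV filename and column names are hypothetical.

import csv
from openai import OpenAI

PROMPT = (
    "I will give you the full LinkedIn profile for {name} below. "
    "Write a first line for an email intro, personalized to the prospect.\n\n"
    "{linkedin_profile}"
)

client = OpenAI()

def first_line(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in whichever model you actually use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

with open("prospects.csv", newline="") as f:
    rows = list(csv.DictReader(f))[:10]  # only the first 10 rows per pass

for row in rows:
    prompt = PROMPT.format(name=row["name"], linkedin_profile=row["linkedin_profile"])
    print(row["name"], "->", first_line(prompt))

# Review the 10 outputs, adjust the prompt, and run the next batch of 10.
```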
