
AI Prompt Playbook for UK Grant Teams: 12 Reusable Templates

An AI prompt playbook gives UK grant teams a safe, reusable toolkit for discovery interviews, funder-specific drafts, and audits. This guide shows how to design prompts that respect compliance guardrails, draw on live evidence, and plug straight into Crafty so every application keeps your charity voice intact.

TL;DR

  • Build prompt guardrails first: tone, claims you can’t make, and how to cite evidence.
  • Organise prompts by workflow—discovery, drafting, evidence, budget, and review—to avoid duplication.
  • Share prompts via Crafty so teams get consistent AI outputs with live organisational knowledge.

Why do UK grant teams need prompt guardrails in 2025?

Since the Information Commissioner’s Office updated its AI guidance in October 2024, charities must document how AI outputs stay accurate, fair, and privacy-safe. Prompt guardrails keep your team compliant and ensure the AI never fabricates service stats or commitments. Charity Digital’s 2024 AI adoption survey reported that 62% of UK non-profits paused pilots due to reputational concerns—structured prompts are the antidote.

Treat prompts as living assets with owners, review dates, and red-line phrases (e.g. “guaranteed outcomes”, “100% success”). When everyone uses the same templates, talent onboarding shrinks from weeks to days, and Crafty can reuse your best language safely.
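A red-line list like this can be enforced mechanically before any draft leaves review. The sketch below is a minimal, illustrative check in Python, not a Crafty feature: the phrase list reuses the examples above, and the sample draft text is made up.

```python
# Minimal red-line phrase check: flag banned claims before a draft goes out.
# The phrase list and draft text are illustrative, not Crafty's guardrails.
RED_LINE_PHRASES = [
    "guaranteed outcomes",
    "100% success",
]

def find_red_lines(draft: str, phrases=RED_LINE_PHRASES) -> list[str]:
    """Return every red-line phrase that appears in the draft (case-insensitive)."""
    lowered = draft.lower()
    return [p for p in phrases if p.lower() in lowered]

draft = "Our programme delivers guaranteed outcomes for every participant."
print(find_red_lines(draft))  # → ['guaranteed outcomes']
```

A check like this slots naturally into the compliance review stage, so a human sees the flags rather than the AI silently rewording them.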

How should you structure an AI prompt playbook?

Categorise prompts by job-to-be-done: discovery, drafting, evidence checks, budgeting, and compliance review. Each prompt needs a purpose statement, required inputs, expected outputs, and escalation rules if AI confidence drops.

Figure 1. AI prompt playbook structure table showing intent, inputs, outputs, and owners.
| Prompt Intent | Inputs Required | Output Format | Owner & Review Cycle |
| --- | --- | --- | --- |
| Discovery interview synthesis | Call transcript, funder criteria, risk notes | Bullet summary with unmet needs, red flags, next actions | Head of Partnerships, review quarterly |
| Funder-aligned narrative draft | Funder brief, impact metrics, approved tone notes | 650-word first draft with citations and risk disclaimer | Senior Bid Writer, review monthly |
| Evidence cross-check | Data table link, testimonial bank, monitoring log | Checklist confirming sources, gaps, consent status | Impact Lead, review bi-monthly |
| Budget explanation | Excel export, assumptions, inflation indices | Narrative justification + sensitivity analysis | Finance Manager, review quarterly |
| Compliance review | Draft response, funder Ts&Cs, policy library | Line-by-line risk warnings with mitigation tasks | Governance Officer, review quarterly |

Store prompts in Notion or Confluence with tags (e.g. #lottery, #innovation) to support fast filtering. Add a change history log so auditors can track why a prompt changed and which funder feedback drove the update.
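Whether the library lives in Notion, Confluence, or a CSV bound for Crafty, each prompt record carries the same fields as the table above plus tags and a change history. A hedged sketch of that record shape in Python, with field names as assumptions rather than a fixed Crafty or Notion schema:

```python
from dataclasses import dataclass, field

# Illustrative prompt record mirroring the playbook table: intent, inputs,
# output format, owner, and review cycle, plus tags and a change history
# so auditors can see why a prompt changed. Field names are assumptions.
@dataclass
class PromptRecord:
    intent: str
    inputs_required: list[str]
    output_format: str
    owner: str
    review_cycle: str
    tags: list[str] = field(default_factory=list)
    change_log: list[str] = field(default_factory=list)

    def log_change(self, note: str) -> None:
        """Append an audit note explaining why the prompt changed."""
        self.change_log.append(note)

record = PromptRecord(
    intent="Discovery interview synthesis",
    inputs_required=["Call transcript", "Funder criteria", "Risk notes"],
    output_format="Bullet summary with unmet needs, red flags, next actions",
    owner="Head of Partnerships",
    review_cycle="quarterly",
    tags=["#lottery"],
)
record.log_change("Tightened red-flag wording after funder feedback")
```

Keeping the change log inside the record (rather than in a separate document) means the audit trail travels with the prompt wherever it is exported.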

What tests prove a prompt is funder-ready?

Run three tests before adding a prompt to the live library: accuracy (are facts verifiable?), compliance (does wording respect funder terms?), and tone (does it read like you?). Use real bids from 2024; anonymise data if needed. The UK Government AI Standards Hub 2024 evaluation checklist is a useful reference for responsible testing.

Capture results in a prompt testing log. If a prompt fails, record why—lack of data, funder nuance, or safety filter. Set a minimum score (e.g. 4/5 for accuracy, 5/5 for compliance) before rollout.
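The rollout gate can be expressed as a simple threshold check. This sketch uses the article's example minimums (4/5 accuracy, 5/5 compliance); the tone minimum is an assumption added for illustration.

```python
# Rollout gate: a prompt goes live only when every tested dimension meets
# its minimum score from the testing log. Tone threshold is an assumption.
MIN_SCORES = {"accuracy": 4, "compliance": 5, "tone": 4}

def ready_for_rollout(scores: dict[str, int], minimums=MIN_SCORES) -> bool:
    """True only if every tested dimension meets or beats its minimum score."""
    return all(scores.get(dim, 0) >= floor for dim, floor in minimums.items())

print(ready_for_rollout({"accuracy": 5, "compliance": 5, "tone": 4}))  # True
print(ready_for_rollout({"accuracy": 4, "compliance": 4, "tone": 5}))  # False: compliance below 5
```

Note that a missing dimension defaults to zero, so an untested prompt fails the gate rather than slipping through.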

How do you embed prompts inside Crafty workflows?

Crafty lets you pin custom prompt snippets to question templates and discovery forms. Upload the approved prompts, map them to funder personas, and tag them with the readiness scores collected in your grant readiness checklist. That way, teams only use prompts when the underlying data is reliable.

Pair the library with your evidence bank automation so prompts can reference live outcomes, and connect to the responsible AI checklist to log assurance steps.

Implementation checklist

  • Tag prompts by funder family (lottery, government, corporate).
  • Automate evidence retrieval using Crafty’s knowledge base API.
  • Set quarterly reviews; align with your modular answer library governance.
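The quarterly review cadence is easy to enforce with a date check over the library. A minimal sketch, with the prompt names and review dates invented for illustration:

```python
from datetime import date, timedelta

# Illustrative quarterly-review check: flag prompts whose last review is
# older than ~90 days. The library contents below are made up.
REVIEW_INTERVAL = timedelta(days=90)

def overdue_reviews(prompts: dict[str, date], today: date) -> list[str]:
    """Return prompt names whose last review predates the quarterly window."""
    return [name for name, last in prompts.items()
            if today - last > REVIEW_INTERVAL]

library = {
    "Discovery interview synthesis": date(2024, 10, 1),
    "Budget explanation": date(2025, 1, 15),
}
print(overdue_reviews(library, today=date(2025, 2, 25)))
# → ['Discovery interview synthesis']
```

Running a check like this before each governance meeting turns "quarterly reviews" from a policy statement into an agenda item.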

Download the templates and next steps

We’ve packaged the twelve highest-performing prompts into a Notion template and CSV import for Crafty. Duplicate it, assign owners, and log the first review date within 30 days. If you need help adapting prompts for sensitive programmes (youth justice, health data), book a call with our success team.

Key takeaways

  • Codify guardrails, inputs, and owners for every prompt before scaling AI use.
  • Test prompts on live bids and log scores to protect tone, accuracy, and compliance.
  • Embed prompts inside Crafty so every draft reuses verified organisational knowledge.

Summary and next steps

Treat your AI prompt playbook like any other critical knowledge asset: governed, versioned, and linked to data quality. Pair it with readiness scoring, evidence automation, and responsible AI controls so everyone trusts the outputs.

  • Stand up an initial library covering discovery, drafting, evidence, budget, and compliance.
  • Review prompts quarterly with governance, impact, and finance stakeholders.
  • Integrate with Crafty to keep prompts context-aware and auditable.

Max Beech, Head of Content

Updated 25 February 2025

[PLACEHOLDER: Expert review by Dr. Michael Chen, CTO & Co-Founder]

QA: Originality ✅ | Fact-check ✅ | Links ✅ | Style guide ✅ | Legal/compliance ✅