kinchin.pro The expert behind the kitchen
Free & Open-Source · Contracts · Policies · Templates

Stop asking AI to "review this."
Give it a proper recipe.

Not just prompts: recipes, sequences, and a methodology your team can run repeatedly. Everything you need to get the most out of AI on documents. Free. Open-source.

One prompt. 60 seconds. The difference:

What most people type

"Review this contract and let me know if there are any issues."

You get a generic 2,000-word essay. Some of it's useful. Most of it isn't. You don't know where to start. You spend 30 minutes reading it and still aren't sure what to do.

What a recipe produces

Risk | Clause | Issue (Plain English) | Fix
πŸ”΄ | 8.3 | Unlimited liability β€” no cap at all | Cap at 100% of fees paid
πŸ”΄ | 12.1 | Auto-renews for 3 years β€” not 1 | Change to 12-month renewal
🟑 | 5.7 | IP assignment too broad β€” covers pre-existing IP | Carve out pre-existing IP
🟒 | 3.2 | Payment terms 45 days β€” slightly long but fine | Accept or push to 30

EU AI Act Article 4 (AI literacy) is already in force.

Without structure, your team is improvising in the kitchen and slop may land on your clients' desks. These recipes are a fast way to show you're training the crew.

Your First 60 Seconds

Never used AI for document review? Start here. This works on any contract, policy, or proposal. Try it right now.

1
Open Claude, ChatGPT, or Copilot. Upload any document you're working on.
2
Paste this prompt:
"We are preparing the attached for signature. Check for grammar, formatting errors, inconsistencies, broken cross-references, missing blanks, and anything that is unusual or doesn't look right."
3
Read the output. Notice how it found things you missed.

That took 10 seconds to set up. Now scroll down for the recipes that do the hard stuff.

New to AI Document Review?

Three steps. That's it. You'll be running your first recipe in under a minute.

1. Pick Your Tool

Open any major AI assistant: Claude, ChatGPT, Microsoft Copilot, or Google Gemini. All recipes work on all platforms.

Not sure which? Copilot for Microsoft 365 teams, Claude for deep analysis, Gemini for large docs. Full guide below.

2. Upload Your Document

Drag and drop any contract, policy, proposal, or agreement into the chat. Most tools accept PDF, Word, or plain text.

3. Paste the Recipe

Copy any prompt from below and paste it in. Fill in the bracketed placeholders (your name, deal value, concerns) and hit send.
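If your team keeps recipes in a shared file, the placeholder-filling step can even be scripted so a half-finished prompt never gets sent. A minimal sketch (the `fill_recipe` helper is hypothetical, not part of any tool mentioned on this page):

```python
import re

def fill_recipe(recipe: str, values: dict) -> str:
    """Replace [bracketed placeholders] in a recipe with supplied values.

    Fails loudly if any placeholder is left unfilled, so an incomplete
    prompt never reaches the AI.
    """
    for name, value in values.items():
        recipe = recipe.replace(f"[{name}]", value)
    leftover = re.findall(r"\[([^\]]+)\]", recipe)
    if leftover:
        raise ValueError(f"Unfilled placeholders: {leftover}")
    return recipe

prompt = fill_recipe(
    "We are [insert your entity] and the estimated value of this deal is [insert amount].",
    {"insert your entity": "Acme Ltd", "insert amount": "EUR 250,000"},
)
# prompt is now ready to paste into any AI chat
```

The fail-loudly check matters more than the substitution: a prompt sent with a stray "[insert amount]" is exactly the kind of slop these recipes are meant to prevent.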

AI Contract Review Prompt Recipes

Pick one. Copy it. You're done.

Click a situation to filter, or search by contract type.

What makes a recipe different from a prompt

Anyone can type a question. A recipe bundles everything the AI needs to give you a consistent, actionable result, not a lucky guess.

πŸ§‚ Ingredients Your document, your role, the deal value, your specific concern
πŸ”§ Tools Which AI to use and how to set it up for this task
πŸ‘¨β€πŸ³ Method A structured instruction that tells the AI exactly how to think
🍽️ The Dish A defined output format β€” a table, a list, a position β€” you can act on immediately
🌢️ ⏱ 2 min Start Here

The Red Flag Finder

The workhorse of the kitchen.

Compare any document against a reference and get a colour-coded risk table with plain-English explanations and ready-to-paste fixes.

You'll need Two documents: original + markup
You'll get Risk table with traffic lights + surgical edits
[Context: We are [insert your entity] and the estimated value of this deal is [insert amount].] We have received the attached markup. Compare the proposed document to the reference (e.g. our original unmodified template) and produce:
A. Quick executive one-line bullet summary of a few top headline risks or gaps to the extent they are different from our template (if at all).
B. Then a comprehensive table comparing only differences between the documents (if any) with these columns:
(1) Risk Level (Red / Amber / Green: use colours, start with highest financial and regulatory risk)
(2) Type: e.g., New obligation, Missing term, Inconsistency, Ambiguity, Other
(3) Clause Reference (or section)
(4) Issue (ELI5): explain in plain language why it matters in practice
(5) Proposed Fix: short surgical edit (if needed) and pushback comment and explanation I can include in the markup.
Also ensure you flag all material differences, missing items, and internal inconsistencies.
Kitchen Rule: Always tell the AI who you are and what the deal is worth. "Review this contract" is a lottery ticket. Context turns it into a strategy.
Example Output (anonymised)

Executive Summary: Supplier has significantly expanded liability exposure and removed key protections around IP, termination, and data handling.

Risk | Type | Clause | Issue (Plain English) | Proposed Fix
πŸ”΄ | Missing term | 8.3 | No liability cap β€” supplier exposed to unlimited claims | Insert: "aggregate liability shall not exceed 100% of fees paid in the preceding 12 months"
πŸ”΄ | New obligation | 12.1 | Auto-renewal changed from 1 year to 3 years with 180-day notice to exit | Revert to 12-month renewal, 90-day notice
🟑 | Ambiguity | 5.7 | IP assignment clause doesn't carve out pre-existing IP | Add: "excluding each party's pre-existing intellectual property"
🟒 | Inconsistency | 3.2 / 9.1 | Payment terms say 30 days in clause 3.2 but 45 days in Schedule 2 | Align to 30 days throughout
🌢️🌢️🌢️ ⏱ 15 min Start Here

The Eight-Eye Review

See it from every angle.

Review the same document from 8 different stakeholder perspectives in a single conversation. Each angle catches what the others miss.

You'll need One document β€” attached at Step 1
You'll get 8 distinct critiques that compound

Run these sequentially in the same conversation. Attach your document at Step 1. The AI carries context forward. You don't need all 8 every time. Steps 1–4 catch 80% of issues.

1

Senior External Counsel

Critique this as a magic circle lawyer.
2

US Big Law Partner

Now review it as a US V10 partner. Assume you received this draft from an associate.
3

The Counterparty

Now review from the perspective of the counterparty, assume it's a [describe counterparty].
4

Their CEO

What about from the counterparty CEO's perspective? What's there, what's missing, any gotchas for them or us?
5

The Gap Check

Let's review this conversation. What stakeholder perspective have we missed? What else should we consider?
6

The Person Who Administers It

Now review from a contract specialist's perspective. How to make it easier to administer? What variables should be in a schedule upfront? Redundancies, overlaps, inconsistencies?
7

The Clause Auditor

Evaluate each clause: is it necessary to provide assurances, set expectations, allocate responsibility, or is it redundant? Summary table please.
8

The Industry Benchmark

Benchmark against industry standards, particularly SEC.GOV filings, to ensure best practices. Where do we diverge and why?
Kitchen Rule: Chain, don't one-shot. Each step builds on the last. The AI carries context forward.
🌢️🌢️ ⏱ 5 min Start Here

The Sticking Points

What they always push back on.

Upload multiple negotiated versions and map what counterparties consistently resist. Stop solving the same problem deal by deal.

We are reviewing our template [contract/policy/agreement]. Attached are [number] versions with markup from [counterparties/clients/vendors] as well as comments from our [operations manager/team lead]. Can you put together a table summarising:
1. What edits each counterparty made and why
2. How our team responded
3. Patterns: what keeps coming up across multiple negotiations
Map out the friction points so we can address them in the template rather than re-negotiating every time.
The more documents you upload, the better the pattern detection. 3–5 is the sweet spot.
🌢️ ⏱ 1 min Start Here

The Last Look

The safety net before signature.

Catches typos, broken cross-references, missing blanks, and formatting gremlins. Use after all negotiations are complete.

We are preparing the attached for signature. Can you check for any grammar, formatting or other errors or inconsistencies please.

This is a final sweep for polish, not substantive review. The AI checks for: typos, grammar, inconsistent defined terms, formatting issues, broken numbering, internal cross-reference errors, missing blanks, signature block completeness, date inconsistencies, and schedule references that don't match.

🌢️ ⏱ 2 min

The Crash Course

Get up to speed, fast.

New to a deal or document type? This creates a briefing for anyone who knows the domain generally but not this specific agreement.

I would like to develop a crash course for an experienced professional familiar with [domain] generally, but not the specifics of this particular agreement (see attached). What are the differences and unique issues to call out? What should be highlighted for their attention? Please organize the answer into: (a) scope/architecture, (b) liability/audit/termination, and (c) top 10 'watch-outs', with page/section cites to the attached.
🌢️🌢️ ⏱ 5 min

The Trickle-Down Check

What flows down to subcontractors?

Extract all flow-down obligations or gap-check your subcontractor flow-downs against a client contract.

We have received a template contract from a client, which is attached. We would like to check if all required flow-downs from that contract to subcontractors are already in our standard flow-downs list, which is also attached. Please perform a gap analysis, cross-reference, and prepare a summary table.
🌢️🌢️ ⏱ 5 min

The Deadwood Cutter

Which clauses just restate the law?

Identify which clauses merely restate what the law already requires versus which create new obligations. Cut the noise from your templates.

Analyse this contract and create a table summarising to what extent each clause simply restates the default position at law and to what extent it materially imposes a new or different obligation. For example, if a clause says that a party will comply with anti-bribery laws, it simply states what the party is required to do anyway. But if it imposes requirements of a law that is not the governing law of the contract, it could be a new or additional requirement.
🌢️🌢️ ⏱ 5 min

The Side-by-Side

Map multiple agreements at once.

Compare a portfolio of contracts to understand which model works best on pricing, termination, liability, structure.

We need to map out the differences between the attached [number] contracts to understand pros and cons, pricing and commercial model, termination rights, etc., so that in future we can pick the right models. Generate a comparison table covering: scope, commercial model, liability caps, termination rights, IP ownership, and any unusual provisions.
🌢️🌢️🌢️ ⏱ 20 min

The Overhaul

Fix a bloated, broken template.

Comprehensive template surgery: targets verbosity, weak risk allocation, placeholders, and structural problems.

Let's identify drafting improvements to the attached document to address the following issues:
β€’ Bloated and generic: The draft reads like a heavily templated document with minimal tailoring to the actual commercial relationship. It's verbose, repetitive, and lacks commercial sharpness.
β€’ Risk allocation is weak: Provisions are not sufficiently protective for either party. Clauses and their interaction are poorly thought through.
β€’ Drafting discipline is inconsistent: There are placeholders, bracketed options, and references to "update later" that should never appear in a final draft.
β€’ Structure is serviceable but lacks clarity: parts are over-engineered; some clauses are buried and hard to navigate.
🌢️🌢️🌢️ ⏱ 15 min

The Fill-in-the-Blanks

Front-load the variables, fix the boilerplate.

Convert any document into a modular format with variable data in boxes up front and standard clauses behind. Makes templates self-service.

Objective: Convert the attached template into a modular format with a front page containing variable data ("Part I boxes") and a second section with standardized clauses ("Part II") that reference the Part I data.
Instructions:
1. Identify all variable elements (e.g., parties, product/service descriptions, pricing, delivery, payment, governing law, duration).
2. Extract these into a structured "Part I" section using labeled boxes.
3. Keep the original language but rewrite clauses in "Part II" to reference the relevant Part I boxes (e.g., "as specified in Box 4").
4. Ensure clarity and accuracy in cross-referencing.
5. Format as:
Β§ Part I – Boxes: Table or list of variables.
Β§ Part II – Clauses: Numbered clauses referencing Part I.
Keep boxes logically organised and streamlined.
Kitchen Rule: This is inspired by BIMCO maritime contracts: a tried-and-tested format used in shipping for decades. The idea: separate "what changes" from "what stays the same."
🌢️🌢️ ⏱ 5 min

The Deal Sheet Builder

Surface what matters before you start.

Identify negotiable vs. fixed terms, create an intake form, and build a client-facing FAQ explaining why certain terms aren't open for discussion.

Review the attached contract or template and identify:
1. Variables that should be surfaced up front (e.g. in a Deal Sheet or intake form) to avoid repeated edits or negotiation later.
2. Which terms are negotiable vs fixed, based on industry norms and practical efficiency.
3. Standard annexes or policies that should be referenced (e.g. DPA, InfoSec, Quality).
4. Anything buried in the body that should be elevated to the front (e.g. governing law, notice contacts).
5. A short client-facing FAQ outline explaining why certain terms are standardized.
Keep output streamlined. Focus on what materially affects risk, cost, performance, or jurisdiction.
🌢️🌢️ ⏱ 5 min

The SOW Sanity Check

Hidden risks in the statement of work.

Review a SOW markup for vague deliverables, hidden time commitments, and scope creep. Traffic-light prioritised.

I am reviewing the attached SOW markup from a demanding client. Our standard template is attached for comparison. Your task:
1. Identify clauses that pose risks beyond our original template. Example: an SOW specifying 'weekly meetings': we assumed a quick one-person check-in, but the client expected multiple meetings with multiple stakeholders weekly.
2. Flag missing, burdensome or risky terms compared to our template.
3. Produce a table starting with highest risks. Traffic lights:
- RED: potential material problems, burdens beyond standard practice
- AMBER: negotiate if possible but minor practical risk
- GREEN: nuisance we can live with
Include clause references.
The "weekly meetings" example is real and exactly the kind of thing AI catches when you give it the right framing. Always explain the practical risk you're worried about.
🌢️🌢️ ⏱ 5 min

The Market Check

Is this standard? Or are we the outlier?

Benchmark your document against industry standards and SEC filings. Find what's better, worse, or missing before your counterparty does.

I am working on updating and streamlining the attached document. Please benchmark it against key industry comparators (you may find samples on sec.gov) to identify what big-ticket items or issues we might be missing that could result in avoidable negotiations. Generate a summary table. Limit to 3–5 closest analogues; add brief citations/URLs; 1-page max.
🌢️ ⏱ 3 min

The Old vs. New

What changed? And should it have?

Clause-by-clause comparison of old vs. new template versions with risk-benefit analysis of every change.

I am working on an improved version of a template. I attach the old version and the new version. Prepare a detailed clause-by-clause comparison table to highlight risks and benefits of moving to the new template.

Follow up with: "What are the surgical edits to align it back to beneficial positions in the old version and remove new risks?"

🌢️🌢️ ⏱ 3 min

The Cold Read

No template? No problem.

Reviewing a document for the first time with no internal template to compare against. Uses industry standards and SEC filings as the benchmark.

We are a [buyer/seller]. Compare the proposed attached contract to industry standard (research sec.gov contracts for examples where buyer has more market power and seller has more market power). Produce:
(1) Quick verdict if ok to sign and an executive one-liner summary of top headline risks or gaps.
(2) A comprehensive table:
- Risk Level (Red / Amber / Green: start with highest risk)
- Clause Reference
- Issue (ELI5)
- Proposed Fix: surgical edit or pushback comment
- Type: New obligation, Missing term, Inconsistency, Ambiguity, Other
Include all material differences, missing items, and internal inconsistencies.
🌢️🌢️ ⏱ 3 min

The Deep Cut

One clause. Full market analysis.

When a single clause is the sticking point. "Is this reasonable? What's market? What are the considerations?"

[Copy recitals and context] The following clause has been proposed: [Insert clause text] Counterparty said: [Insert their comment/concern] Is this reasonable? What is market and what are considerations? Check industry guidance/templates as well as sample contracts on sec.gov.
🌢️🌢️🌢️ ⏱ 5 min Specialist

The Non-Compete Check

Would this actually hold up?

Comprehensive enforceability analysis of post-termination restrictions under UK/Irish law. Traffic-light assessment with duration benchmarks.

You are a legal analyst reviewing post-termination restrictions (PTRs) in employment contracts under UK/Irish law. Analyze each restrictive covenant and output in a table format.
REQUIRED ROWS (13): 1. Employee Name/Role | 2. Role Level | 3. Non-Compete Duration | 4. Non-Compete Scope | 5. Non-Solicit Duration | 6. Non-Solicit Scope | 7. Non-Poach Duration & Scope | 8. Garden Leave | 9. Confidentiality | 10. Consideration/Timing | 11. Severability | 12. Unusual Provisions | 13. Overall Enforceability
TRAFFIC LIGHTS: πŸ”΄ Likely unenforceable, fatal flaws | 🟑 Uncertain enforcement, material concerns | 🟒 Well-drafted, likely enforceable
FATAL FLAWS (trigger πŸ”΄): "All customers" without lookback; "in any capacity" without specificity; Duration >12m; No identifiable business interest; Vague critical terms.
DURATION BENCHMARKS: Junior 3-6m | Mid 6m | Senior 6-9m | Executive up to 12m (rare). Irish preference: 6m safer; 12m outer limit.
Include KEY RISKS and RECOMMENDED REMEDIATION after the table.
Note: This is a simplified version. Works best with Claude or GPT-4.
🌢️ ⏱ 2 min Specialist

The NDA Quick-Check

Benchmarked against the OneNDA standard.

Fast NDA assessment against the open OneNDA standard. Risk table with surgical fixes.

Compare the proposed NDA to the OneNDA reference and produce:
Quick executive one-line summary of top headline risks or gaps (if any).
Then a table comparing only differences with columns:
(1) Risk Level (Red / Amber / Green)
(2) Type: New obligation, Missing term, Inconsistency, Ambiguity, Other
(3) Clause Reference
(4) Issue (ELI5): why it matters in practice
(5) Proposed Fix: surgical edit and pushback comment
Flag all material differences, missing items, and internal inconsistencies.
🌢️🌢️🌢️ ⏱ 10 min Specialist

The HIPAA Check

BAA review against Common Paper standard.

HIPAA Business Associate Agreement assessment distinguishing mandatory requirements from optional pro-party terms.

Review the attached Business Associate Agreement (BAA) using the Common Paper BAA v1.0 as reference.
Tasks:
1. Identify mandatory HIPAA requirements (per 45 CFR Β§Β§ 164.502(e), 164.504(e), 164.308–164.316, 164.410).
2. Identify optional clauses (Pro–Covered Entity or Pro–Business Associate).
3. Flag missing or overly strict terms vs. HIPAA or Common Paper default.
4. Apply risk levels: HIGH (blocker), MEDIUM (negotiate), LOW (can live with).
5. Provide pushback language for optional terms beyond HIPAA/Common Paper defaults.
Return:
Executive Summary: Overall posture, HIGH/MEDIUM risks with fixes, what we can accept.
Summary Table: Clause | Status vs HIPAA | Citations | Risk | Pushback/Redline
Quick Takeaways: Compliance status, negotiation focus, risk posture.
🌢️ ⏱ 30 sec Essential

The Sanity Check

Have we gone too far? Or not far enough?

Run this after any detailed analysis. Catches over-engineering, redundancies, and missing practical considerations.

Now let's do a common sense and practical check on these. Anything missing or could be clarified or simplified, anything redundant or conflicting with the original template?

Follow up with: "Can you just double check that you haven't missed anything, nor have you gone too far on anything."

This is the most underrated prompt in the kitchen. AI can over-engineer. The calibration question catches it every time.
🌢️ ⏱ 30 sec Essential

The Replay

How could I have done that better?

After any AI session, ask it to critique your prompting. The fastest way to get better at this. You'll be surprised what you learn about your own assumptions.

Let's review this conversation. How could I have prompted you better?
Kitchen Rule: Use this at the end of every significant AI session. You're not just doing the work: you're building a system that makes the next session faster.
🌢️ ⏱ 30 sec

The Recipe Saver

Turn a good session into a reusable prompt.

Extract your prompts from any conversation with AI's clarifications appended. Build your own personal playbook over time.

I would like to save the instructions I provided, as well as any clarifications or suggestions you made which I agreed with (expressly or impliedly) in this conversation. To the extent I disagreed or clarified what you said, include that too. I want to add this to my prompt library and use it in future conversations with AI. Ideally I want it formatted as one prompt, but potentially a chain of prompts if that might work better.

Save to OneNote, Notion, or a shared team wiki. Over time, you'll build a library tailored to your specific domain and organisation.
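If your team prefers a plain file over a wiki, a saved recipe can be as simple as a JSON entry. A rough sketch, where the file name and the `save_recipe` helper are made up for illustration:

```python
import json
from pathlib import Path

def save_recipe(library: Path, name: str, prompt: str, notes: str = "") -> None:
    """Append one refined prompt to a shared JSON prompt library."""
    recipes = json.loads(library.read_text()) if library.exists() else {}
    recipes[name] = {"prompt": prompt, "notes": notes}
    library.write_text(json.dumps(recipes, indent=2))

lib = Path("prompt_library.json")  # hypothetical team library file
save_recipe(
    lib,
    "The Last Look",
    "We are preparing the attached for signature. Can you check for any "
    "grammar, formatting or other errors or inconsistencies please.",
    notes="Run after all negotiations are complete.",
)
```

One file, version-controlled alongside your templates, beats prompts scattered across chat histories.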

Why These Recipes Work

Not a single prompt. A strategy. Four stages that turn AI from a parlour trick into a reliable workflow.

Triage

Get the lay of the land. What's standard, what's unusual, what matters? One prompt gives you a map of where to spend your time.

Stress-Test

Review from multiple angles. Senior counsel, counterparty, CEO, contract admin. Each perspective catches what the others miss.

Synthesise

Consolidate into actionable output: a checklist, a comparison table, a markup with surgical edits. Not a 40-page memo.

Operationalise

Turn insights into reusable assets: templates, playbooks, intake forms. The goal isn't a one-off answer. It's a system.

Tools of the Trade

Every kitchen needs good knives. These recipes work on any major AI platform, but some tools are better suited to specific jobs. Here's what we've found after hundreds of document reviews.

Microsoft Copilot β†—
The daily workhorse
Recommended daily driver Microsoft ecosystem
Deep Think Mode
The workhorse for contract review. Thorough analysis with seamless integration into Word, Outlook, and the wider Microsoft ecosystem. Use only in 'deep think' mode unless you don't care about quality.

Why we recommend it: If your team lives in Microsoft 365, Copilot's integration is unmatched as a daily driver: review a contract, draft an email, update a tracker, all without switching tools. Simpler setup, less friction. However, since it currently relies on OpenAI models, it shares the same issues as ChatGPT.

Claude (Anthropic) β†—
The specialist knife
Best for reports & analysis Deep thinking
Opus + Extended Thinking
Currently the best model for presentations, reports, and deep analysis. If you need a 20-page due diligence report or a board-ready deck, don't bother with anything else.
Sonnet
Strong daily driver. Fast, reliable, excellent for most contract review recipes. Good balance of quality and speed.
Haiku
The prep chef. Perfect for bulk work: transcribing scanned documents, cleaning messy data, organising large batches of files.

A word of caution: It seems to have the best style and feel, but can be brittle at times. It's not uncommon for it to stop generating or responding for no apparent reason.

ChatGPT (OpenAI) β†—
Widely available Good default
With Deep Thinking On
Solid option for document review. Make sure you explicitly enable deep thinking mode. Without it, the routing mechanism can default to a lighter model that produces less thorough output.

The taste of slop: Success breeds its own problems. Because ChatGPT is so ubiquitous, its default output can smell a bit like slop. It also seems the most sycophantic of the models, so it's important to prompt it adversarially. E.g. don't say 'I've drafted the contract'; say 'a new lawyer sent me this contract and I am concerned it is full of holes' so it's a bit more critical.

Gemini (Google) β†—
The heavy lifter
Best for large context Numbers & research
Pro
Unmatched for large context, i.e. loading many long documents or very large files in a single session. Also the strongest model for numerical analysis and financial data in contracts. Produces fantastic research reports.
Flash / Mini
Like Haiku, great at quickly prepping things. Very fast, very reliable. Excellent for bulk document processing, data cleanup, and formatting tasks.

Watch out: While known for finding a needle in a haystack, it can be prone to dropping large chunks of text without telling you. Requires constant vigilance and asking "have you left anything out?"

A note on other models: Chinese-developed models (DeepSeek, Qwen, etc.) can deliver excellent results. However, many Western organisations restrict their use for data compliance reasons. Self-hosted or custom-integrated models are also an option in principle, but tend to be either too brittle, too expensive, or too quickly overtaken to justify the investment for most document-heavy teams. Focus on the platforms above.

Test Before You Trust

AI models improve (and change) constantly. We recommend regularly testing your prompts side-by-side on LMSys Chatbot Arena to make sure you're using the best tool for the job. What was best three months ago may not be best today.

Multi-Course Meals

Recipes are ingredients. These are full meals: chained sequences for complete workflows.

1

The Red Flag Finder

Run the traffic-light risk table. Get your executive summary and clause-by-clause breakdown.

2

The Sanity Check

Catch over-engineering, redundancies, and anything that doesn't pass the "would a judge care?" test. Done.

1

The Deal Sheet Builder

Surface what's negotiable vs. fixed. Build your intake form before touching the drafting.

2

The Overhaul

Attack the bloat. Tighten risk allocation, fix inconsistencies, remove placeholders.

3

The Fill-in-the-Blanks

Front-load the variables into boxes. Separate deal terms from boilerplate.

4

The Market Check

Benchmark against SEC filings and industry standards. Find gaps before your counterparty does.

5

The Sanity Check

"Are we missing anything salient? What would be a fantastic idea?"

1

The Crash Course

Scope, liability, and top 10 watch-outs. Get oriented fast.

2

The Market Check

Position the document against industry comparators.

3

The Eight-Eye Review

The full gauntlet, all 8 perspectives. This is where the deep insights live.

4

The Sanity Check

Final calibration before you ship it.

1

The Red Flag Finder

Map every deviation from your template with risk levels and proposed fixes.

2

The SOW Sanity Check

Check the SOW doesn't contradict or weaken your MSA protections.

3

The Deep Cut

Deep-dive on specific friction points. "Is this reasonable? What's market?"

4

The Sticking Points

If you have multiple deals, map the patterns. Find what they always push back on.

1

The Replay

"How could I have prompted you better?" - the single most underused prompt in AI.

2

The Recipe Saver

Extract and save your prompts. Build your personal playbook over time.

Pantry

Open-source reference standards we use as benchmarks. You don't need all of these; each recipe tells you which ones it uses.

Bonterms β†—

Open-source building blocks for technology transactions: modular, mix-and-match terms for cloud, SaaS, and data agreements.

Used in: Template development recipes

OneNDA β†—

A free, standardised NDA that anyone can adopt. Used as the benchmark in The NDA Quick-Check so you can instantly see how a proposed NDA deviates from a known standard.

Used in: The NDA Quick-Check

Common Paper β†—

Open-source contract standards for cloud, SaaS, and HIPAA agreements. The HIPAA Check uses their BAA v1.0 as the reference for what "market" looks like.

Used in: The HIPAA Check

WorldCC Standards β†—

Global benchmarks for commercial contracting terms. Useful when you need to argue that a position is (or isn't) market standard across industries.

Used in: The Market Check, The Sticking Points

SEC EDGAR β†—

Public filings contain real, negotiated contracts between real companies. The best free source of "what does market actually look like?" for specific industries.

Used in: The Market Check, The Cold Read, The Eight-Eye Review

GitLaw β†—

The concept and community treating law as code. Collaborative, version-controlled legal documents applying software engineering principles to contract drafting.

Used in: Contract drafting

Lessons from the Kitchen

Hard-won principles from hundreds of AI-assisted document reviews.

Context is everything

Always tell the AI who you are, what the deal is worth, and what your concerns are. "Review this contract" is a lottery ticket. Adding three lines of context turns it into a strategy.

Tables beat prose

Request tables with clear columns and traffic-light risk levels. Ask for "ELI5" plain language. Your stakeholders will thank you and you'll actually use the output.

Chain, don't one-shot

Break complex tasks into steps. SOW review β†’ MSA reconciliation β†’ market benchmark. Each step builds on the last. The AI carries context forward within a conversation.
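For anyone wiring chained reviews into an automated pipeline, chaining is just one growing message list. A sketch under stated assumptions: `call_model` below is a stand-in for your provider's chat API (Anthropic, OpenAI, etc.), not a real function:

```python
def call_model(messages):
    # Stand-in: a real implementation would send `messages` to your
    # provider's chat API and return the assistant's reply as text.
    return f"(reply to: {messages[-1]['content']})"

def chain(steps):
    """Run review steps sequentially in ONE conversation, not fresh chats."""
    messages = []
    for step in steps:
        messages.append({"role": "user", "content": step})
        reply = call_model(messages)  # the model sees the whole history
        messages.append({"role": "assistant", "content": reply})
    return messages

history = chain([
    "Critique this as a magic circle lawyer.",
    "Now review from the counterparty's perspective.",
    "Have you gone too far on anything?",
])
# Three user turns and three assistant turns, all in one context window
```

The design point: each step is appended to the same list rather than starting a new conversation, which is exactly why step 3 can say "anything" and the model knows what it refers to.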

Say "no hyperlinks"

When you need clean output for a markup or email, explicitly say: no markdown, no HTML, no brackets, no footnotes. Otherwise you'll get citation noise you have to strip out.

Always double-check

After any detailed analysis, run The Sanity Check. AI can over-engineer. "Have you gone too far on anything?" is surprisingly effective at catching it.

Learn from every session

End with The Replay and save your prompts with The Recipe Saver. You're not just doing the work, you're building a system that compounds!

Frequently Asked Questions

Common questions about using AI for contract and document review.

Can AI review contracts reliably?
AI can reliably triage contracts, flag risk, compare documents against templates, and catch formatting errors. It works best with structured prompts that give context (who you are, deal value, your concerns) rather than vague requests like 'review this'. It should augment human review, not replace it.

What makes an effective contract review prompt?
The most effective contract review prompts include context (your role, deal value), specify the output format (risk table with traffic lights), request plain-English explanations, and ask for specific fixes. A generic 'review this contract' produces generic output. Structured prompts produce actionable risk tables.

Which AI tool is best for contract review?
Microsoft Copilot in deep-think mode and GPT-5.2 are strong daily workhorses with good Microsoft ecosystem integration. Claude Opus 4.6 in extended thinking mode excels at deep analysis and reports. Google Gemini Pro handles large context windows and numbers best. The right tool depends on your workflow. Test regularly on LMSys Chatbot Arena.

Is it safe to upload confidential documents?
Check your AI provider's data policies. Enterprise tiers of Claude, ChatGPT, Copilot, and Gemini typically offer data processing agreements and do not train on your inputs. Always verify compliance with your organisation's data handling policies before uploading sensitive documents.

How do I get started?
Start with a simple pre-signature check: upload a document and ask AI to check for grammar, formatting errors, inconsistencies, broken cross-references, and missing blanks. Once comfortable, progress to structured risk-table prompts, multi-perspective reviews, and template development workflows.

Built from 15 years of cross-border legal practice across pharma, tech, and maritime and hundreds of AI-assisted document reviews. These recipes exist because structured prompting shouldn't be a secret.

Curated by Sergey Kinchin, PhD (AIGP, GSEC) Β· Dublin

Need governed AI workflows built into your environment? That's what I do β†’