Prompt Chaining: The Full Requirements Pipeline from Notes to Sprint-Ready Stories

Tools: Claude Pro or ChatGPT Plus
Time to build: 60 minutes
Difficulty: Intermediate
Prerequisites: Comfortable using ChatGPT or Claude for requirements drafting. See the Level 3 guide: "Build a Requirements Documentation Workflow with ChatGPT Plus"

What This Builds

A 5-step prompt chain that takes your raw meeting notes all the way through to sprint-ready user stories with testable Given/When/Then acceptance criteria, a quality-reviewed requirements set, and ready-to-send stakeholder communications. The whole workflow runs in about 30 minutes instead of half a day.

Prerequisites

  • ChatGPT Plus ($20/month) or Claude Pro ($20/month)
  • Your current requirements format or template
  • Meeting notes from a recent or upcoming workshop
  • Basic familiarity with writing prompts in a chatbot

The Concept

Prompt chaining means each step in a conversation builds on the output of the previous step. Instead of five separate conversations, each starting from scratch, you run one conversation where Step 2 uses Step 1's output as input, Step 3 uses Step 2's output, and so on.

Think of it like a production line where each workstation refines the product a little further. The output of each station is the input for the next. At the end, you have a finished product.

The 5-step pipeline:

  1. Extract: pull raw requirements from messy notes
  2. Formalize: convert to structured BRD language
  3. Story-ize: convert to user stories with acceptance criteria
  4. Review: quality check for gaps, ambiguity, testability
  5. Communicate: write the stakeholder summary
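If you ever want to run this pipeline through an API instead of a chat window, the chaining idea reduces to a few lines of code. A minimal Python sketch, where `call_llm` is a hypothetical stand-in for whichever model API you use:

```python
# Minimal prompt-chain sketch: each step's output becomes the next
# step's input. call_llm is a hypothetical stand-in for a real model
# API call; it is stubbed here so the chaining logic is visible.

def call_llm(prompt: str) -> str:
    # Replace this stub with a real API call.
    return f"[model output for: {prompt[:40]}]"

STEPS = [
    "STEP 1: EXTRACT - pull raw requirements from these notes:\n{input}",
    "STEP 2: FORMALIZE - rewrite as 'the system shall' requirements:\n{input}",
    "STEP 3: STORY-IZE - convert to user stories with Given/When/Then ACs:\n{input}",
    "STEP 4: QUALITY REVIEW - flag gaps, ambiguity, untestable items:\n{input}",
    "STEP 5: COMMUNICATE - draft the epic description and sponsor summary:\n{input}",
]

def run_pipeline(meeting_notes: str) -> str:
    output = meeting_notes
    for template in STEPS:
        output = call_llm(template.format(input=output))  # chain: output -> input
    return output
```

The same structure works in a chat window: you are the `for` loop, and the conversation history is the pipe between stations.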

Build It Step by Step

Part 1: Set Up the Context

Start a new conversation in Claude Pro or ChatGPT Plus. Paste this at the very start. It's your "factory configuration":

Copy and paste this
You are my requirements documentation assistant. We're going to process a set of meeting notes through a 5-step pipeline. I'll give you Step 1 instructions now, and after you complete each step, I'll give you the next step. Do not combine steps — complete one fully before I ask for the next.

My documentation standards:
- User stories: "As a [role], I want [capability], so that [benefit]"
- Acceptance criteria: Given/When/Then format, specific and testable
- Requirements: Active voice, present tense, numbered FR-001 format
- BRD sections: Executive Summary, Functional Requirements, Non-Functional Requirements, Assumptions, Open Questions

Confirm you're ready for Step 1.

Wait for confirmation, then proceed.
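Under the hood, this setup message is simply the first entry in one growing conversation. A rough sketch of that state, using the common chat-API message convention (the roles and structure here are illustrative, not tied to any vendor):

```python
# One conversation, one growing message list. The factory
# configuration goes in first; every step's prompt and reply are
# appended so later steps can see all earlier output.

SETUP = (
    "You are my requirements documentation assistant. "
    "We will process meeting notes through a 5-step pipeline. "
    "Complete one step fully before I ask for the next."
)

conversation = [{"role": "system", "content": SETUP}]

def record_exchange(user_prompt: str, assistant_reply: str) -> None:
    """Append one step; the reply would come back from the model."""
    conversation.append({"role": "user", "content": user_prompt})
    conversation.append({"role": "assistant", "content": assistant_reply})

record_exchange("STEP 1: EXTRACT ...", "1. Auto-score leads based on activity ...")
record_exchange("STEP 2: FORMALIZE ...", "FR-001: The system shall ...")
```

This is why "do not combine steps" matters: each reply stays in the history, so Step 3 can reference Step 2's output without you repasting it.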

Part 2: Step 1: Extract Raw Requirements

Copy and paste this
STEP 1: EXTRACT

Here are the raw notes from today's requirements workshop. Extract all business needs, desired capabilities, and constraints mentioned by stakeholders. Present as a flat numbered list with no formatting yet, just capturing every distinct requirement or request you can identify.

Don't miss anything even if it seems minor. Include things that were mentioned as "nice to have" and mark those with (nice-to-have).

NOTES:
[paste your meeting notes here]

Review the output: are all the key items captured? Add anything you noticed that was missed.

Part 3: Step 2: Formalize Requirements

Copy and paste this
STEP 2: FORMALIZE

Take the extracted list from Step 1. Rewrite each item as a formal requirement using "the system shall [specific capability]" format. Group them into:
- Functional Requirements (FR-001, FR-002...)
- Non-Functional Requirements (NFR-001...)
- Assumptions (ASM-001...)
- Out of Scope items

Remove duplicates. Split any compound requirements into separate items.
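Once Step 2's output is in hand, a quick script can confirm the IDs came out sequential with no gaps. A small sketch, assuming the three-digit `FR-001` convention from the setup prompt:

```python
import re

def ids_are_sequential(ids: list[str], prefix: str = "FR") -> bool:
    """True if ids are exactly PREFIX-001, PREFIX-002, ... in order."""
    numbers = []
    for item in ids:
        match = re.fullmatch(rf"{prefix}-(\d{{3}})", item)
        if match is None:          # wrong prefix or padding
            return False
        numbers.append(int(match.group(1)))
    return numbers == list(range(1, len(numbers) + 1))
```

Gaps usually mean the model silently dropped a duplicate; that is worth a glance before moving to Step 3.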

Part 4: Step 3: Convert to User Stories

Copy and paste this
STEP 3: STORY-IZE

Take the Functional Requirements from Step 2. Convert each into user stories with acceptance criteria.

Format:
Story: As a [user role], I want [capability], so that [benefit]
Acceptance Criteria:
  AC1: Given [precondition], when [action], then [expected result]
  AC2: Given...
  (3-5 ACs per story)

For NFRs that can't be user stories, keep them as system requirements with measurable acceptance criteria.
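Before pasting Step 3's output into your backlog tool, a one-line format check catches stories that drifted from the template. A sketch; the regex simply mirrors the story format above:

```python
import re

# Matches the "As a [role], I want [capability], so that [benefit]"
# template; tolerates "As an" and surrounding whitespace.
STORY_PATTERN = re.compile(r"As an? .+, I want .+, so that .+", re.IGNORECASE)

def is_well_formed(story: str) -> bool:
    return STORY_PATTERN.fullmatch(story.strip()) is not None
```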

Part 5: Step 4: Quality Review

Copy and paste this
STEP 4: QUALITY REVIEW

Review all the user stories and requirements you just wrote. For each issue you find:
- Quote the specific story/requirement
- Describe the problem: ambiguous language / untestable / missing edge case / conflict with another requirement / too large (needs splitting)
- Provide a corrected version

Then: list any important requirements that seem MISSING from what you'd expect for this type of feature.

Apply the suggested fixes and update the requirements.
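The model's review is good at structure; a dumb lint pass of your own catches the classic weasel words it sometimes lets through. A sketch with an illustrative (not exhaustive) ambiguity list:

```python
# Flags weak, untestable wording in a requirement. The word list is
# illustrative; extend it with your team's usual offenders.
AMBIGUOUS = {"fast", "easy", "appropriate", "user-friendly",
             "robust", "flexible", "soon", "several", "quickly"}

def flag_ambiguity(requirement: str) -> list[str]:
    words = requirement.lower().replace(",", " ").replace(".", " ").split()
    return sorted(w for w in AMBIGUOUS if w in words)
```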

Part 6: Step 5: Stakeholder Communication

Copy and paste this
STEP 5: COMMUNICATE

Based on the finalized requirements from Steps 3-4, write two communications:

A) A 1-paragraph Jira epic description for the development team: technical summary, scope, what's in/out

B) A 3-bullet stakeholder summary for the project sponsor: what we're building, why it matters to the business, key decisions or open questions that still need sponsor input

Real Example: CRM Lead Management Feature

Context: You've just run a 75-minute requirements workshop for a new lead scoring feature in Salesforce.

Raw notes you paste in Step 1:

"Marketing wants to auto-score leads based on website activity. Sales says they only want to see leads above 50 points. IT said we can use Salesforce's native scoring but the license costs more. Product owner said mobile alerts for hot leads are essential. Compliance mentioned GDPR — leads from EU need consent flag. Someone asked about integration with HubSpot but the PM said that's out of scope. Finance wants to track ROI by lead source."

Step 1 output: 7 distinct requirements extracted, 2 nice-to-haves flagged

Step 2 output: FR-001 through FR-006, NFR-001 (GDPR), 1 out-of-scope item

Step 3 output: 6 user stories with 4 ACs each = 24 acceptance criteria written in 90 seconds

Step 4 output: 3 quality issues flagged: "above 50 points" needs a source of truth definition, mobile alerts needs a threshold definition, GDPR flag has no defined workflow

Step 5 output: One Jira epic description and one sponsor summary email draft

Total time: 30 minutes including note-pasting and review, versus 4-5 hours of manual work


What to Do When It Breaks

  • Step 3 produces generic user roles ("As a user") → Before Step 3, add: "The user roles in this system are: [list specific roles from your project, e.g., 'Sales Rep,' 'Marketing Manager,' 'Compliance Officer']. Use these specific roles in every user story."
  • Step 4 misses issues → The AI quality check finds structural problems well but may miss domain-specific issues. Always do a final read-through yourself, focusing on domain logic.
  • Context gets lost in long conversations → If the conversation exceeds ~30 exchanges, start fresh. Paste the Step 3 output as your starting context: "Here are finalized user stories we're working with: [paste]. Please perform Step 4 Quality Review."
  • Output format drifts → Add "Maintain the exact format from the previous steps" at the start of each new step prompt.
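The context-loss fix above can be reduced to a simple checkpoint rule. A sketch; the 30-exchange threshold is the heuristic from the bullet above, not a hard model limit:

```python
def needs_restart(exchange_count: int, limit: int = 30) -> bool:
    """Heuristic: past ~30 exchanges, start a fresh conversation."""
    return exchange_count > limit

def checkpoint_prompt(step3_output: str) -> str:
    """Seed prompt for the fresh conversation, per the fix above."""
    return (
        "Here are the finalized user stories we're working with:\n"
        f"{step3_output}\n"
        "Please perform Step 4 Quality Review."
    )
```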

Variations

  • Simpler version: Run only Steps 1 and 3, skipping formalization (Step 2) and quality review (Step 4) when timelines are tight. Still saves 2+ hours.
  • Extended version: Add Step 6: "Generate a requirements traceability matrix linking each story to its business objective, and estimate relative complexity (XS/S/M/L/XL) for sprint planning."

What to Do Next

  • This week: Run one full workshop through this pipeline end-to-end
  • This month: Save the five step prompts in a text file or notes app for quick access. Name it "BA Requirements Pipeline."
  • Advanced: Combine this pipeline with the Claude Project setup (Level 4 guide). Your formatting instructions are pre-loaded, so you only need to paste the step instructions and notes.

Advanced guide for Business Analyst professionals. Best results with Claude Pro or ChatGPT Plus for longer context handling.