# Ralph Prompt: 75+ Copy-Paste Ready Templates for Self-Improving AI (2026)

## Introduction: The End of "Good Enough" AI Output

For years, we've been trapped in a cycle of prompt-and-pray AI interactions. You write a prompt, cross your fingers, and hope the AI produces something useful. When it doesn't, you tweak the prompt, try again, and repeat this frustrating dance until you either settle for "good enough" or give up entirely. This traditional prompting approach has a fundamental flaw: it treats AI as a one-shot generator rather than an iterative problem-solver. The result? Wasted time, inconsistent quality, and AI outputs that look promising but fail under real scrutiny.

Enter the Ralph Prompt—a revolutionary approach that transforms AI from a suggestion engine into a self-improving problem-solver. Named after the Ralph Loop methodology developed at Ralphable, a ralph prompt doesn't just ask for output; it creates a systematic process where the AI breaks complex work into atomic tasks, defines explicit pass/fail criteria for each, tests its own work, and iterates automatically until every single criterion is met. This isn't about getting "pretty good" results; it's about achieving objectively correct, verifiable outcomes.

What makes ralph prompts fundamentally different is their built-in quality control mechanism. Traditional prompts might say "write a Python function to sort data," and you'll get something that looks right but might have edge cases or inefficiencies. A ralph prompt says: "Break this into atomic tasks: 1) Design the algorithm, 2) Implement with error handling, 3) Create test cases, 4) Run tests, 5) Fix any failures. For each task, define pass/fail criteria. Iterate until all criteria pass." The AI becomes its own quality assurance team, catching and fixing its mistakes without human intervention.
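To make this concrete, here is a minimal sketch (not taken from any template below) of what binary pass/fail criteria look like once they become executable checks for that sorting example; the function and data are purely illustrative.

```python
# Minimal, illustrative sketch: each pass/fail criterion becomes an executable check
# the AI can run against its own output. Function and data are hypothetical.
def sort_records(records, key):
    """Return records sorted by the given key without mutating the input."""
    return sorted(records, key=lambda r: r[key])

data = [{"id": 3}, {"id": 1}, {"id": 2}]
# Criterion 1: output is ordered by the key
assert [r["id"] for r in sort_records(data, "id")] == [1, 2, 3]
# Criterion 2: the original list is not mutated
assert [r["id"] for r in data] == [3, 1, 2]
# Criterion 3: an empty input is handled without error
assert sort_records([], "id") == []
```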

This article represents the most comprehensive collection of ralph prompt templates available anywhere. We've distilled months of research and testing into 75+ copy-paste ready templates that you can use immediately with Claude Code and other advanced AI systems. These aren't just theoretical concepts—they're battle-tested templates for code generation, content creation, data analysis, system design, debugging, and more. Each template follows the proven Ralph Loop methodology that ensures your AI doesn't stop at "looks good" but continues until "all criteria pass."

You'll discover how to structure prompts that make AI work like a senior engineer who documents their assumptions, tests their code, validates their logic, and refuses to deliver incomplete work. We'll show you the five essential components of every effective ralph prompt, provide detailed examples across multiple domains, and give you templates you can adapt to your specific needs. Whether you're building software, analyzing data, creating content, or solving complex problems, these ralph prompts will transform how you work with AI.

## What Is a Ralph Prompt?

A ralph prompt is a structured instruction set that initiates what we call a "Ralph Loop"—a systematic process where AI breaks down complex work into small, verifiable pieces (atomic tasks), defines objective pass/fail criteria for each piece, tests its own output against those criteria, and automatically iterates until all criteria are satisfied. Unlike traditional prompts that produce a single response, a ralph prompt creates an ongoing conversation where the AI acts as both creator and critic, refusing to deliver work that doesn't meet explicitly defined standards.

The term originates from Ralphable, where we discovered that the most reliable AI outputs came from prompts that enforced rigorous self-testing. The core insight was simple: AI makes mistakes, just like humans, but unlike humans, AI can test its own work instantly and objectively if given the right framework. A ralph prompt provides that framework by requiring the AI to:

  • Decompose the problem into independently verifiable atomic tasks
  • Define clear, testable success criteria for each task
  • Execute each task while documenting its approach
  • Test its output against the defined criteria
  • Iterate on any failures until all criteria pass
  • Signal completion only when everything is verified
Let's examine a basic example to illustrate the difference. A traditional prompt for creating a website component might look like this: `Create a responsive navigation bar with a logo on the left and three menu items on the right.`

    The AI might produce something that looks right but could have hidden issues: maybe it's not truly responsive on all devices, perhaps the menu doesn't work on mobile, or maybe the code has accessibility issues. You'd need to manually test it, find problems, and go back and forth with the AI.

A ralph prompt for the same task transforms the interaction:

I need a responsive navigation bar. Follow the Ralph Loop methodology:

    TASK CONTEXT: Create a production-ready responsive navigation bar with logo left, menu right.

    ATOMIC TASK BREAKDOWN:

  • Design HTML structure with semantic elements
  • Create CSS for desktop layout
  • Create CSS for mobile responsiveness with hamburger menu
  • Add JavaScript for mobile menu toggle
  • Implement accessibility features
  • Cross-browser testing simulation
PASS/FAIL CRITERIA FOR EACH TASK:
Task 1: Must use `<nav>`, `<ul>`, `<li>` elements appropriately
Task 2: Must align logo left, menu right on screens >768px
Task 3: Must collapse to hamburger menu on screens <769px
Task 4: Must toggle menu visibility on click
Task 5: Must include ARIA labels and keyboard navigation
Task 6: Must render correctly in Chrome, Firefox, Safari

    ITERATION LOGIC: After completing all tasks, test each criterion. If any fail, diagnose the issue, fix it, and retest. Continue until all criteria pass.

    COMPLETION SIGNAL: Only say "ALL CRITERIA PASS: Navigation bar complete" when every single criterion above is satisfied.

Begin the Ralph Loop now.

    The AI will now approach this systematically. It will first break down the work, then for each atomic task, it will define even more specific criteria. When it writes the HTML, it will check if it used semantic elements. When it creates the CSS, it will verify the layouts. It will test the mobile responsiveness, check the JavaScript functionality, validate accessibility, and simulate cross-browser rendering. If the hamburger menu doesn't work on the first try, the AI will diagnose why, fix it, and retest—all without you asking.

    This approach works because it leverages AI's strengths (rapid iteration, pattern recognition, code generation) while mitigating its weaknesses (overconfidence, missing edge cases, inconsistency). The ralph prompt creates a feedback loop where the AI's output becomes input for its own quality assessment. This is particularly powerful with Claude Code, which can execute code, test outputs, and analyze results within a single conversation.

    The philosophical shift is significant: instead of viewing AI as a tool that produces answers, we view it as a process that produces verified solutions. This aligns with how expert humans work—we don't write code and assume it works; we write tests, run them, fix failures, and repeat. The ralph prompt simply makes this rigorous engineering mindset explicit and enforceable in AI interactions.

## The Anatomy of a Perfect Ralph Prompt

    Every effective ralph prompt contains five essential components that work together to create the self-improving loop. Missing any component reduces the effectiveness, while mastering all five creates AI interactions that consistently produce verified, production-ready results.

    1. Task Context

    The task context sets the stage by clearly defining what needs to be accomplished, why it matters, and any constraints or requirements. This isn't just a restatement of the request—it provides the "why" behind the "what," which helps the AI make better decisions during execution. Example template:
    `markdown TASK CONTEXT: [Clear description of the overall goal] [Why this task matters or how it will be used] [Any constraints: time, resources, standards, dependencies] [Success looks like: description of the end state] ` Complete example: `markdown TASK CONTEXT: Create a Python data validation module for user registration. This will be used in production with 100K+ daily users, so it must be robust. Constraints: Must use Pydantic v2, support async validation, and include comprehensive error messages. Success looks like: A reusable module that catches all invalid inputs before database insertion. `

    2. Atomic Task Breakdown

    This is where complex work gets decomposed into small, independently verifiable pieces. Each atomic task should be:
    • Specific enough to have clear boundaries
    • Independent enough to be testable on its own
    • Small enough that failure points are obvious
    • Sequential when dependencies exist
    Example template:
    `markdown ATOMIC TASK BREAKDOWN:
  • [First discrete unit of work]
  • [Second discrete unit of work]
  • [Third discrete unit of work]
  • ... continue until all aspects are covered
    ` Complete example: `markdown ATOMIC TASK BREAKDOWN:
  • Define Pydantic models for user input with type annotations
  • Add custom validators for password strength and email format
  • Create async validation functions for checking unique username/email
  • Implement error collection and user-friendly error messages
  • Write unit tests for valid and invalid inputs
  • Create usage examples in documentation
`

    3. Pass/Fail Criteria

    For each atomic task, define objective, testable conditions that determine success. These should be:
    • Binary (either pass or fail, no ambiguity)
    • Testable (the AI can verify them programmatically or logically)
    • Specific (avoid subjective terms like "good" or "clean")
    • Complete (cover all important aspects)
    Example template:
    `markdown PASS/FAIL CRITERIA: Task 1: [Criterion 1], [Criterion 2], [Criterion 3] Task 2: [Criterion 1], [Criterion 2] ... continue for all tasks ` Complete example: `markdown PASS/FAIL CRITERIA: Task 1: Models include username, email, password fields; All fields have type hints Task 2: Password validator requires 8+ chars, 1 uppercase, 1 number; Email validator checks format Task 3: Async functions query mock database; Return True/False without exceptions Task 4: All validation errors collected in list; Messages explain fix to user Task 5: Tests cover valid input, all invalid cases; All tests pass when run Task 6: Examples show basic usage, error handling, and async usage `
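As a reference point, here is a rough sketch of what Tasks 1 and 2 of the running example might produce, assuming Pydantic v2; the field names simply mirror the criteria above, and the email regex is a deliberate simplification.

```python
# Rough sketch of Tasks 1-2 from the running example, assuming Pydantic v2.
# Field names mirror the criteria; the email regex is a deliberate simplification.
import re
from pydantic import BaseModel, field_validator

class UserRegistration(BaseModel):
    username: str
    email: str
    password: str

    @field_validator("email")
    @classmethod
    def email_format(cls, value: str) -> str:
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value):
            raise ValueError("email must look like user@example.com")
        return value

    @field_validator("password")
    @classmethod
    def password_strength(cls, value: str) -> str:
        if len(value) < 8 or not re.search(r"[A-Z]", value) or not re.search(r"\d", value):
            raise ValueError("password needs 8+ characters, one uppercase letter, one digit")
        return value
```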

    4. Iteration Logic

    This component defines what happens when criteria fail. It should specify:
    • How failures are detected
    • The diagnosis process
    • The fix-and-retest cycle
    • When to try different approaches vs. debug current approach
    Example template:
    `markdown ITERATION LOGIC: After completing all tasks, test each criterion systematically. If any criterion fails: 1. Diagnose the root cause 2. Implement a fix 3. Retest that specific criterion 4. Continue testing remaining criteria Repeat until ALL criteria pass. If stuck after 3 attempts on same issue, try a fundamentally different approach. ` Complete example: `markdown ITERATION LOGIC: Complete all 6 tasks, then test each criterion in order. For any failing criterion: analyze why it failed, fix the issue, then retest. Example: If Task 5 tests fail, check if implementation is wrong or tests are wrong, fix accordingly. Continue loop until all 12 criteria (2 per task) pass. If same criterion fails twice, approach from different angle on third attempt. `

    5. Completion Signal

    The final component tells the AI how to indicate successful completion. This creates a clear endpoint and prevents premature stopping. Example template:
    `markdown COMPLETION SIGNAL: Only say "[SPECIFIC PHRASE]" when ALL criteria pass. Before that, continue iterating. Do not indicate completion prematurely. ` Complete example: `markdown COMPLETION SIGNAL: Only say "ALL CRITERIA PASS: Data validation module complete" when all 12 criteria pass. Before that, continue testing and iterating. Do not say "done" or "complete" until every criterion is verified. `

    Putting It All Together

    Here's a complete ralph prompt template you can copy and adapt:

    `markdown TASK CONTEXT: [Your overall goal and purpose] [Constraints and requirements] [What success looks like]

    ATOMIC TASK BREAKDOWN:

  • [Task 1 description]
  • [Task 2 description]
  • [Task 3 description]
  • [Add as needed]

    PASS/FAIL CRITERIA: Task 1: [Criterion 1], [Criterion 2] Task 2: [Criterion 1], [Criterion 2] Task 3: [Criterion 1], [Criterion 2] [Match to tasks]

    ITERATION LOGIC: After all tasks, test each criterion. For failures: diagnose, fix, retest. Continue until ALL criteria pass. If stuck, try different approach.

    COMPLETION SIGNAL: Only say "ALL CRITERIA PASS: [Project name] complete" when verified.

    Begin the Ralph Loop now. `

    The power of this structure is its adaptability. In the following sections, you'll see 75+ specialized templates applying this anatomy to different domains—from code generation to content creation to data analysis. Each maintains the five-component structure while adapting to specific use cases, giving you a comprehensive toolkit for self-improving AI interactions.

## Ralph Prompts for Code Development (15 Templates)

    Ralph prompts transform Claude Code from a suggestion engine into an autonomous, iterative developer. These templates enforce the Ralph Loop—breaking work into atomic tasks with explicit pass/fail criteria, ensuring Claude tests, diagnoses, and iterates until every objective condition is met. Below are 15+ detailed, production-ready templates you can copy and paste directly.

    ---

    1. Function Implementation

    Use when: You need a robust, production-ready function with error handling and tests.
    `markdown RALPH PROMPT: Implement the function calculate_invoice_total based on the specification.

    SPECIFICATION:

    • Input: items (list of dicts with 'price' (float), 'quantity' (int), 'taxable' (bool)), discount_percent (float, 0-100), customer_type ('retail', 'wholesale')
    • Output: Final total (float), rounded to 2 decimal places.
    • Logic: Sum item subtotals (price * quantity). Apply 8% tax to taxable items only. Apply discount based on customer_type: retail gets discount_percent, wholesale gets discount_percent + 5%. Minimum charge is $1.00.
    ATOMIC TASKS:
  • Write function signature with type hints and a clear docstring.
  • Implement core calculation logic for item summation and tax.
  • Implement discount logic based on customer_type.
  • Add validation: ensure discount_percent is 0-100, quantities are positive, prices are non-negative.
  • Enforce the minimum $1.00 total.
  • Write 5 unit tests using pytest that cover edge cases (empty list, wholesale discount, zero taxable items).
PASS/FAIL CRITERIA:

    • [PASS] Function executes without syntax errors.
    • [PASS] 5/5 unit tests pass.
    • [PASS] Handles invalid input with clear ValueError messages.
    • [PASS] Output is correctly rounded to 2 decimals.
    • [PASS] Minimum total of $1.00 is enforced (e.g., $0.75 input yields $1.00).
    ITERATION LOGIC:
    • Run the unit tests. If any fail, analyze the failure, correct the function, and re-run ALL tests.
    • Manually test with the invalid input case {'price': -5, 'quantity': 1, 'taxable': True}. If no ValueError is raised, diagnose validation logic and fix.
    • Test the minimum charge edge case. If output is < $1.00, adjust logic.
    ` Explanation: This prompt forces Claude to act as a test-driven developer, not stopping at "code that works" but iterating until all validation and edge cases are formally verified.
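For orientation, one shape a passing implementation might take is sketched below. It assumes tax applies per taxable line item before the discount, which is one reasonable reading of the spec; the prompt's own unit tests remain the arbiter.

```python
# Illustrative sketch of a passing implementation; assumes tax applies per taxable
# line item before the discount. The prompt's unit tests still decide pass/fail.
def calculate_invoice_total(items, discount_percent, customer_type):
    if not 0 <= discount_percent <= 100:
        raise ValueError("discount_percent must be between 0 and 100")
    subtotal = 0.0
    for item in items:
        if item["price"] < 0 or item["quantity"] <= 0:
            raise ValueError("prices must be non-negative and quantities positive")
        line = item["price"] * item["quantity"]
        if item["taxable"]:
            line *= 1.08  # 8% tax on taxable items only
        subtotal += line
    effective_discount = discount_percent + 5 if customer_type == "wholesale" else discount_percent
    total = subtotal * (1 - effective_discount / 100)
    return round(max(total, 1.00), 2)  # enforce $1.00 minimum, 2-decimal rounding
```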

    ---

    2. API Endpoint Development

    Use when: Creating a new Flask/FastAPI endpoint with full CRUD, validation, and error responses.
    `markdown RALPH PROMPT: Develop a RESTful API endpoint POST /api/v1/products for product creation.

    SPECIFICATION:

    • Framework: FastAPI. Use Pydantic for request/response models.
    • Database: Assume an async SQLAlchemy Product model with fields: id (int, PK), name (str), sku (str, unique), price (float), category_id (int, FK), is_active (bool).
    • Request Body: { "name": "string", "sku": "string", "price": number, "category_id": integer }
    • Behavior: Create product. SKU must be unique (return 409 if conflict). category_id must exist in database (return 404 if not found). Return 201 with created product data.
    ATOMIC TASKS:
  • Define Pydantic ProductCreate and ProductResponse schemas.
  • Write the FastAPI route decorator and function signature.
  • Implement database session dependency and async create logic.
  • Add integrity check for duplicate SKU (simulate query).
  • Add foreign key validation for category_id.
  • Implement proper HTTP exception responses (409, 404, 422).
  • Write 3 integration test cases (success, duplicate SKU, invalid category).
PASS/FAIL CRITERIA:

    • [PASS] Code is syntactically valid FastAPI.
    • [PASS] Pydantic schemas correctly validate/restrict input types.
    • [PASS] Simulated "duplicate SKU" condition returns a 409 status.
    • [PASS] Simulated "invalid category" returns a 404 status.
    • [PASS] All 3 integration tests pass when logically executed.
    ITERATION LOGIC:
    • Validate the Pydantic schema rejects {"price": "ten"}. If it accepts, tighten schema.
    • Test the duplicate SKU logic: if the code does not raise/return 409, debug the uniqueness check.
    • Run the integration test suite conceptually. For any failing scenario, revise the endpoint logic and retest.
    ` Explanation: This ensures the endpoint is robust against invalid data and real-world conflicts, with Claude iterating on validation and error handling until all HTTP criteria are met.
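As a rough illustration of the target, the sketch below swaps the async SQLAlchemy layer for in-memory dicts so the 409/404 paths are visible end to end; it assumes FastAPI with Pydantic v2 and is not a drop-in implementation.

```python
# Illustrative sketch only: in-memory dicts stand in for the async SQLAlchemy layer
# so the 409/404 behaviour can be seen end to end (FastAPI + Pydantic v2 assumed).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
_CATEGORIES = {1, 2, 3}            # assumed seed categories
_PRODUCTS: dict[str, dict] = {}    # keyed by SKU

class ProductCreate(BaseModel):
    name: str
    sku: str
    price: float
    category_id: int

class ProductResponse(ProductCreate):
    id: int
    is_active: bool = True

@app.post("/api/v1/products", response_model=ProductResponse, status_code=201)
async def create_product(payload: ProductCreate) -> ProductResponse:
    if payload.sku in _PRODUCTS:
        raise HTTPException(status_code=409, detail="SKU already exists")
    if payload.category_id not in _CATEGORIES:
        raise HTTPException(status_code=404, detail="Category not found")
    record = {"id": len(_PRODUCTS) + 1, "is_active": True, **payload.model_dump()}
    _PRODUCTS[payload.sku] = record
    return ProductResponse(**record)
```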

    ---

    3. Database Query Optimization

    Use when: An existing SQL query is slow; you need an optimized, indexed, and analyzed version.
    `markdown RALPH PROMPT: Optimize the provided slow SQL query for PostgreSQL.

ORIGINAL QUERY:

```sql
SELECT o.id, o.order_date, c.name, SUM(oi.quantity * p.price) AS total
FROM orders o
JOIN customers c ON o.customer_id = c.id
JOIN order_items oi ON o.id = oi.order_id
JOIN products p ON oi.product_id = p.id
WHERE o.order_date > NOW() - INTERVAL '30 days'
GROUP BY o.id, o.order_date, c.name
HAVING SUM(oi.quantity * p.price) > 1000
ORDER BY total DESC;
```

    ATOMIC TASKS:

  • Analyze the query: identify missing indexes, unnecessary joins, or inefficient clauses.
  • Rewrite the query for optimal performance (e.g., use CTEs, subqueries, better joins).
  • Propose 3 specific indexes (with CREATE INDEX statements).
  • Write an equivalent query using window functions if beneficial.
  • Provide a brief performance comparison explanation (what was improved).
PASS/FAIL CRITERIA:

    • [PASS] Rewritten query returns identical result set to original (logically verify).
    • [PASS] Proposed indexes are on columns used in JOIN, WHERE, and GROUP BY.
    • [PASS] No Cartesian products or unnecessary table scans are introduced.
    • [PASS] The HAVING clause logic is preserved and efficient.
    • [PASS] Explanation clearly states estimated performance gain (e.g., "Indexes avoid full scan on orders.order_date").
    ITERATION LOGIC:
    • Compare the output schema of the new query with the original. If different, adjust SELECT/JOIN logic.
    • Check if any proposed index is on a low-cardinality column (like is_active). If so, replace it with a more selective one.
    • Ensure the HAVING clause doesn't force calculation of all sums before filtering. If it does, consider moving logic to a subquery.
    ` Explanation: Claude must prove equivalence and justify each optimization, iterating until the query is both correct and demonstrably more efficient.

    ---

    4. React Component Creation

    Use when: Building a reusable, accessible, and stateful React component with TypeScript.
    `markdown RALPH PROMPT: Create a DataTable React component with sorting, pagination, and filtering.

    SPECIFICATION:

    • Tech: React 18+, TypeScript, Tailwind CSS.
    • Props: data (array of objects), columns (array defining key, header, sortable), pageSize (number).
    • Features: Client-side sorting (click headers), pagination (prev/next, page numbers), text filter input (filters all columns).
    • UI: Clean, accessible table with clear visual states for sort direction.
    ATOMIC TASKS:
  • Define TypeScript interfaces for DataTableProps, Column, and component state.
  • Build the component structure with JSX, using <table> and semantic HTML.
  • Implement sorting logic (toggle ascending/descending).
  • Implement pagination logic (slice data based on current page).
  • Implement global filter input and logic.
  • Add ARIA attributes for accessibility (aria-sort, aria-label).
  • Create a usage example with sample data.
PASS/FAIL CRITERIA:

    • [PASS] Component compiles with tsc --noEmit (no TypeScript errors).
    • [PASS] Sorting toggles correctly between asc/desc/unsorted on click.
    • [PASS] Pagination correctly limits displayed rows to pageSize.
    • [PASS] Filter input reduces visible rows based on text match in any column.
    • [PASS] All interactive elements have appropriate ARIA attributes.
    ITERATION LOGIC:
    • Run a TypeScript check. For any errors, fix the interface or prop usage.
    • Test sort logic: click a sortable column twice; it must cycle states. If stuck, debug the state management.
    • Test pagination with 25 items and pageSize=10. Page 3 should show items 21-25. If not, fix the slice calculation.
    • Verify filter: typing "test" with no matches should show empty table. If not, adjust filter function.
    ` Explanation: This loop ensures a fully functional, type-safe, and accessible UI component, with Claude iterating on interactivity and compliance until all criteria pass.

    ---

    5. Unit Test Suite

    Use when: You have existing code that lacks tests and needs comprehensive coverage.
    `markdown RALPH PROMPT: Write a complete pytest suite for the PaymentProcessor class.

CLASS CODE:

```python
class PaymentProcessor:
    def __init__(self, gateway):
        self.gateway = gateway
        self.transactions = []

    def charge(self, amount, currency="USD"):
        if amount <= 0:
            raise ValueError("Amount must be positive")
        if currency not in ["USD", "EUR"]:
            raise ValueError("Unsupported currency")
        result = self.gateway.charge(amount, currency)
        self.transactions.append({"amount": amount, "currency": currency, "id": result["id"]})
        return result

    def get_total_revenue(self, currency="USD"):
        total = sum(t["amount"] for t in self.transactions if t["currency"] == currency)
        return round(total, 2)
```

    ATOMIC TASKS:

  • Create test file test_payment_processor.py.
  • Write fixtures for a mock gateway using unittest.mock.Mock.
  • Test charge(): success path, validates positive amount, validates currency.
  • Test charge(): verifies transaction is recorded.
  • Test get_total_revenue(): sums correctly, filters by currency, rounds.
  • Test integration: multiple charges correctly affect total revenue.
  • Achieve 100% logical branch coverage.
PASS/FAIL CRITERIA:

    • [PASS] All tests pass when executed (simulate execution).
    • [PASS] Negative amount test raises ValueError with correct message.
    • [PASS] Unsupported currency ("GBP") test raises ValueError.
    • [PASS] Mock gateway charge is called with correct arguments.
    • [PASS] get_total_revenue returns 150.0 for transactions [100.0, 50.0] in USD.
    ITERATION LOGIC:
    • Run the test suite conceptually. For any failing test, examine the assertion and fix the test or the understanding of the class.
    • Check coverage: ensure there's a test for the currency not in ["USD", "EUR"] branch. If missing, add it.
    • Verify the mock is asserted. If tests pass without checking gateway.charge was called, add the assertion.
    ` Explanation: Claude becomes a quality engineer, iterating until the test suite is exhaustive, passes, and validates both happy and error paths.
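Two of the required tests might look like the sketch below; it assumes pytest and that the PaymentProcessor class above is importable (the module name in the commented import is hypothetical).

```python
# Sketch of two of the required tests, assuming pytest and that the PaymentProcessor
# class above is importable (the module name below is hypothetical).
from unittest.mock import Mock
import pytest
# from payment_processor import PaymentProcessor

@pytest.fixture
def gateway():
    mock_gateway = Mock()
    mock_gateway.charge.return_value = {"id": "txn_1"}
    return mock_gateway

def test_charge_rejects_negative_amount(gateway):
    processor = PaymentProcessor(gateway)
    with pytest.raises(ValueError, match="Amount must be positive"):
        processor.charge(-10)

def test_total_revenue_filters_by_currency(gateway):
    processor = PaymentProcessor(gateway)
    processor.charge(100.0, "USD")
    processor.charge(50.0, "USD")
    processor.charge(75.0, "EUR")
    assert processor.get_total_revenue("USD") == 150.0
    gateway.charge.assert_called_with(75.0, "EUR")  # mock called with correct args
```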

    ---

    6. Code Review Automation

    Use when: You want Claude to rigorously review a code diff for bugs, security, and style.
    `markdown RALPH PROMPT: Perform a code review on the following GitHub-style diff. Identify bugs, security issues, and style deviations.

DIFF:

```diff
 def user_login(request):
     if request.method == 'POST':
         username = request.POST.get('username')
         password = request.POST.get('password')
         user = User.objects.filter(username=username).first()
-        if user.password == password:
+        if user and user.check_password(password):
             login(request, user)
             return redirect('/dashboard')
         else:
             return render(request, 'login.html', {'error': 'Invalid credentials'})
     return render(request, 'login.html')
```

    ATOMIC TASKS:

  • Analyze the fix: does it correctly address the plain-text password vulnerability?
  • Identify a new bug introduced: what if user is None?
  • Check for other security issues (e.g., lack of rate limiting, information leakage).
  • Evaluate style: is the error message generic enough?
  • Suggest an additional improvement (e.g., using authenticate()).
  • Output a review checklist with [PASS]/[FAIL] items.
PASS/FAIL CRITERIA:

    • [PASS] The review identifies the potential AttributeError when user is None.
    • [PASS] The review confirms the fix properly uses check_password().
    • [PASS] The review suggests at least one additional security improvement.
    • [PASS] The review notes the error message is appropriately generic (doesn't reveal if user exists).
    • [PASS] The output is a structured checklist, not just prose.
    ITERATION LOGIC:
    • Examine the user.check_password(password) line. If the review doesn't note it's safe only if user exists, fail and re-analyze.
    • Check if the review suggests adding from django.contrib.auth import authenticate. If not, suggest it as an improvement.
    • Ensure the checklist format is used. If output is a paragraph, reformat into a checklist and re-evaluate criteria.
    ` Explanation: This turns Claude into an automated review bot, forcing it to apply a structured checklist and iterate until all review points are systematically covered.
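If the review suggests moving to authenticate(), the corrected view might look roughly like this sketch; it is one possible improvement, not the only acceptable fix.

```python
# One possible "suggested improvement" from the review above, sketched with Django's
# authenticate() so the view never calls user.check_password() directly (illustrative only).
from django.contrib.auth import authenticate, login
from django.shortcuts import redirect, render

def user_login(request):
    if request.method == 'POST':
        username = request.POST.get('username')
        password = request.POST.get('password')
        user = authenticate(request, username=username, password=password)
        if user is not None:
            login(request, user)
            return redirect('/dashboard')
        # Generic message: never reveal whether the username exists
        return render(request, 'login.html', {'error': 'Invalid credentials'})
    return render(request, 'login.html')
```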

    ---

    7. Bug Fix with Root Cause Analysis

    Use when: A bug is reported; you need a fix, not just a patch, with understood root cause.
    `markdown RALPH PROMPT: Diagnose and fix the bug in the merge_user_data function.

    BUG REPORT: "Function sometimes returns duplicate user IDs when merging lists."

CODE:

```python
def merge_user_data(list_a, list_b):
    """Merge two lists of user dicts by 'id'."""
    merged = list_a.copy()
    for user_b in list_b:
        if not any(user_a['id'] == user_b['id'] for user_a in list_a):
            merged.append(user_b)
    return merged
```

    ATOMIC TASKS:

  • Reproduce the bug: create test inputs where list_a itself has duplicate IDs.
  • Identify the root cause: the logic only checks duplicates between lists, not within list_a.
  • Write a corrected version that ensures all IDs in the output are unique.
  • Preserve order: keep items from list_a first, then unique items from list_b.
  • Write 3 test cases: within-list duplicates, cross-list duplicates, and empty lists.
  • Propose a more efficient data structure (e.g., dictionary) for large lists.
PASS/FAIL CRITERIA:

    • [PASS] Corrected function returns no duplicate IDs for any input.
    • [PASS] Ordering rule is preserved (list_a items first).
    • [PASS] All 3 test cases pass.
    • [PASS] Root cause is clearly stated in one sentence.
    • [PASS] Suggested optimization uses a dict or set for O(n) performance.
    ITERATION LOGIC:
    • Test with list_a = [{'id': 1}, {'id': 1}]. If output contains duplicate id=1, the fix is insufficient; revise.
    • Verify order: input list_a = [{'id': 2}], list_b = [{'id': 1}] must output [{'id': 2}, {'id': 1}]. If reversed, fix.
    • Ensure the explanation of root cause is precise. If vague, refine it.
    ` Explanation: Claude must first prove it understands why the bug happens, then fix it completely, iterating on test cases until the output is guaranteed unique.
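One fix that satisfies the uniqueness and ordering criteria might look like the sketch below; the inline assertion mirrors the within-list duplicate test case.

```python
# Sketch of a fix that de-duplicates within and across lists while preserving order;
# the prompt's own test cases remain the authority on whether it passes.
def merge_user_data(list_a, list_b):
    """Merge two lists of user dicts by 'id', keeping the first occurrence of each id."""
    merged, seen = [], set()
    for user in list(list_a) + list(list_b):
        if user['id'] not in seen:
            seen.add(user['id'])
            merged.append(user)
    return merged

assert merge_user_data([{'id': 1}, {'id': 1}], [{'id': 1}, {'id': 2}]) == [{'id': 1}, {'id': 2}]
```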

    ---

    8. Performance Optimization

    Use when: A script or function is functionally correct but unacceptably slow.
    `markdown RALPH PROMPT: Optimize the find_common_tags function for speed.

ORIGINAL CODE:

```python
def find_common_tags(posts):
    """Find tags common to all posts."""
    common_tags = []
    first_post_tags = posts[0]['tags']
    for tag in first_post_tags:
        tag_in_all = True
        for post in posts:
            if tag not in post['tags']:
                tag_in_all = False
                break
        if tag_in_all:
            common_tags.append(tag)
    return common_tags
```

    ATOMIC TASKS:

  • Analyze time complexity: currently O(n*m) where n=tags in first post, m=posts.
  • Optimize by converting post['tags'] lists to sets for O(1) lookups.
  • Use set intersection operation to find common tags directly.
  • Handle edge case: empty posts list.
  • Write a benchmark comparison (original vs. optimized) using pseudo-timing.
  • Ensure result order is not required (set intersection may change order).
PASS/FAIL CRITERIA:

    • [PASS] Optimized function returns the same logical result as original.
    • [PASS] Code uses set.intersection() or equivalent.
    • [PASS] Edge case posts=[] is handled (return empty list or raise error).
    • [PASS] Time complexity is correctly stated as O(m*k) where k is avg tag count, but with much lower constant factors.
    • [PASS] Benchmark shows at least 10x speedup for large inputs (e.g., 1000 posts, 100 tags each).
    ITERATION LOGIC:
    • Test with sample data: posts = [{'tags':['a','b']}, {'tags':['a','c']}]. Result should be ['a']. If not, debug set logic.
    • Check empty list handling. If original code crashes on posts[0], the optimized version must handle it gracefully.
    • Verify the use of set. If still using nested loops with in on lists, fail and enforce set conversion.
    ` Explanation: Claude must not only rewrite the function but prove the optimization is correct and significantly faster, iterating until the algorithmic improvement is achieved.
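The set-based rewrite the tasks call for might look roughly like this; note the result order is not guaranteed, which the criteria explicitly allow.

```python
# Sketch of the set-based rewrite the tasks describe; result order is not guaranteed.
def find_common_tags(posts):
    """Find tags common to all posts; returns an empty list for an empty input."""
    if not posts:
        return []
    common = set(posts[0]['tags'])
    for post in posts[1:]:
        common &= set(post['tags'])  # O(1) membership via sets instead of nested list scans
    return list(common)

assert find_common_tags([{'tags': ['a', 'b']}, {'tags': ['a', 'c']}]) == ['a']
```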

    ---

    9. Security Audit

    Use when: Reviewing code for vulnerabilities (SQLi, XSS, auth flaws, etc.).
    `markdown RALPH PROMPT: Conduct a security audit on the following snippet of a Django view.

CODE:

```python
import json
from django.http import JsonResponse
from django.db import connection

def search_products(request):
    query = request.GET.get('q', '')
    category = request.GET.get('category', '')
    sql = f"SELECT * FROM products WHERE name LIKE '%{query}%'"
    if category:
        sql += f" AND category = '{category}'"
    with connection.cursor() as cursor:
        cursor.execute(sql)
        results = cursor.fetchall()
    return JsonResponse({'results': results})
```

    ATOMIC TASKS:

  • Identify the critical SQL Injection vulnerability.
  • Identify any additional issues (JSON serialization of raw tuples, lack of input sanitization).
  • Provide a fixed version using Django's ORM or parameterized queries.
  • Suggest protection against potential XSS in the JSON response if query/category were reflected.
  • Recommend a rate-limiting strategy for this endpoint.
  • Output a vulnerability report with severity (Critical, High, Medium).
PASS/FAIL CRITERIA:

    • [PASS] The audit identifies the SQLi via string interpolation as Critical.
    • [PASS] Fixed code uses cursor.execute(sql, [params]) or Django ORM.
    • [PASS] The report mentions the risk of exposing raw DB tuples (information disclosure).
    • [PASS] Suggests using json.dumps with a default serializer or a DRF serializer.
    • [PASS] At least one additional hardening recommendation (rate limiting, input validation) is provided.
    ITERATION LOGIC:
    • Check the fixed code: if it still uses f-string or .format() on the SQL string, fail and enforce parameterized queries.
    • Ensure the vulnerability report is structured. If it's a paragraph, reformat into a list with severity labels.
    • Verify the recommendation for JSON serialization addresses the fetchall() tuple issue. If not, add it.
    ` Explanation: Claude acts as a security analyst, required to find all issues and provide corrected code, iterating until the fix eliminates the critical vulnerability and addresses secondary concerns.
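The parameterized-query fix might look roughly like the sketch below; the explicit column list and the products schema (id, name, category) are assumptions for illustration, and Django's ORM would be the more idiomatic route.

```python
# Sketch of the parameterized-query fix the audit calls for (raw cursor shown;
# Product.objects.filter(...) via the ORM would be the more idiomatic alternative).
from django.db import connection
from django.http import JsonResponse

def search_products(request):
    query = request.GET.get('q', '')
    category = request.GET.get('category', '')
    sql = "SELECT id, name, category FROM products WHERE name LIKE %s"
    params = [f"%{query}%"]
    if category:
        sql += " AND category = %s"
        params.append(category)
    with connection.cursor() as cursor:
        cursor.execute(sql, params)  # values are bound, never interpolated into SQL
        columns = [col[0] for col in cursor.description]
        results = [dict(zip(columns, row)) for row in cursor.fetchall()]
    return JsonResponse({'results': results})
```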

    ---

    10. Documentation Generation

    Use when: You have a module or API that needs comprehensive, ready-to-publish docs.
    `markdown RALPH PROMPT: Generate complete documentation for the StringUtils class.

CLASS CODE:

```python
class StringUtils:
    @staticmethod
    def slugify(text, separator="-"):
        """Convert text to URL-safe slug."""
        # ... implementation ...

    @staticmethod
    def truncate(text, length, suffix="..."):
        """Truncate text to given length, preserving words."""
        # ... implementation ...

    @classmethod
    def is_palindrome(cls, text):
        """Check if text reads same forwards/backwards, ignoring case/punctuation."""
        # ... implementation ...
```

    ATOMIC TASKS:

  • Write an overview module docstring.
  • Document each method with Args, Returns, Raises, and Examples.
  • Include a "Quick Start" usage example.
  • Create a table of common use cases and which method to use.
  • Format for MkDocs or Sphinx compatibility (using Markdown).
  • Ensure all examples are copy-paste runnable in a Python shell.
PASS/FAIL CRITERIA:

    • [PASS] Each method docstring includes at least one working code example.
    • [PASS] The slugify example shows input "Hello World!" and output "hello-world".
    • [PASS] The truncate example demonstrates the suffix parameter.
    • [PASS] The is_palindrome example correctly handles "A man, a plan, a canal: Panama".
    • [PASS] The final output is a single, well-structured Markdown document.
    ITERATION LOGIC:
    • Test each code example by mentally executing it. If slugify("Hello World!") wouldn't produce the claimed output, correct the example or the understanding.
    • Check for a "Raises" section in truncate for negative length. If missing, add it.
    • Ensure the document has a clear table of contents via headers. If it's a wall of text, restructure with headers.
    ` Explanation: Claude iterates until the documentation is practical, example-driven, and accurate, serving as a reliable reference.

    ---

    11. Refactoring Legacy Code

    Use when: You have working but messy "legacy" code that needs cleaning without breaking it.
    `markdown RALPH PROMPT: Refactor the process_report function for readability and maintainability.

LEGACY CODE:

```python
def process_report(data):
    r = []
    for d in data:
        if d['status'] == 'active':
            if d['value'] > 0:
                v = d['value'] * 1.1
                if d.get('discount'):
                    v = v * 0.9
                r.append({'id': d['id'], 'final': round(v, 2)})
            else:
                r.append({'id': d['id'], 'final': 0})
        else:
            r.append({'id': d['id'], 'final': None})
    return r
```

    ATOMIC TASKS:

  • Extract the complex conditional logic for a single item into a helper function calculate_final.
  • Replace magic numbers (1.1, 0.9) with named constants.
  • Use a dictionary comprehension or map for the main loop if appropriate.
  • Improve variable names (r -> results, d -> item).
  • Add type hints.
  • Write a test to ensure output matches the original function exactly for 3 varied inputs.
PASS/FAIL CRITERIA:

    • [PASS] Refactored code produces identical output for all possible inputs.
    • [PASS] Magic numbers are replaced with TAX_MULTIPLIER = 1.1 and DISCOUNT_MULTIPLIER = 0.9.
    • [PASS] The helper function calculate_final is pure and testable.
    • [PASS] Type hints are added for function signature and helper.
    • [PASS] Code passes a sanity check with the provided test inputs.
    ITERATION LOGIC:
    • Run a mental diff between original and new outputs for edge cases: value=0, status='inactive', discount=True. If any differ, debug the helper function.
    • Check for constants. If numbers 1.1 and 0.9 remain in the logic, replace them.
    • Verify type hints. If missing for the data: list[dict] parameter, add them.
    ` Explanation: Claude must prove the refactor is behavior-preserving by designing tests, iterating until the new code is cleaner but functionally identical.
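One behaviour-preserving refactor consistent with the criteria might look like this sketch; the constant and helper names are illustrative.

```python
# Sketch of one behaviour-preserving refactor; constants and helper names are illustrative.
TAX_MULTIPLIER = 1.1
DISCOUNT_MULTIPLIER = 0.9

def calculate_final(item: dict) -> float | None:
    """Pure helper: returns the final value for a single report item."""
    if item['status'] != 'active':
        return None
    if item['value'] <= 0:
        return 0
    value = item['value'] * TAX_MULTIPLIER
    if item.get('discount'):
        value *= DISCOUNT_MULTIPLIER
    return round(value, 2)

def process_report(data: list[dict]) -> list[dict]:
    return [{'id': item['id'], 'final': calculate_final(item)} for item in data]
```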

    ---

    12. Migration Script

    Use when: You need a one-time script to transform or migrate data safely.
    `markdown RALPH PROMPT: Write a data migration script to convert user profile JSON blob to relational tables.

    CONTEXT:

    • Old: users table has a profile column (JSON) with {"address": "...", "preferences": {"newsletter": true}}.
    • New: user_addresses and user_preferences tables with foreign key to users.id.
    • Script must: read all users, parse JSON, insert into new tables, handle missing keys, log errors, and be idempotent.
    ATOMIC TASKS:
  • Connect to database (use placeholders for credentials).
  • Select all id, profile from users.
  • For each row, safely parse JSON, using .get() for missing keys.
  • Insert into user_addresses (user_id, address_text).
  • Insert into user_preferences (user_id, newsletter_opt_in).
  • Add error handling: skip/log rows with invalid JSON.
  • Ensure script can be run multiple times without creating duplicates (use ON CONFLICT or check existence).
PASS/FAIL CRITERIA:

    • [PASS] Script logic handles missing preferences key gracefully (default to newsletter: false).
    • [PASS] Invalid JSON in a row logs error and continues processing other rows.
    • [PASS] Insert statements include conflict handling (e.g., ON CONFLICT (user_id) DO UPDATE).
    • [PASS] The script includes a dry-run mode that prints changes without committing.
    • [PASS] Code is structured as a function with a main() for testability.
    ITERATION LOGIC:
    • Test with a sample row {'profile': 'invalid-json'}. Script must catch JSONDecodeError and log.
    • Check conflict logic: if run twice, row count should not double. If duplicates possible, add conflict clause.
    • Verify default for missing newsletter key is False. If None, adjust the .get() default.
    ` Explanation: This prompt forces Claude to build a robust, fault-tolerant ETL script, iterating on error handling and idempotency until it's safe for production.
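The heart of such a script is the defensive per-row transform; a sketch of just that piece (connection handling and the ON CONFLICT inserts omitted) might look like this, with transform_row as a purely illustrative name.

```python
# Sketch of the per-row transform only (connection handling omitted); shows the
# defensive JSON parsing and defaulting the criteria require. Names are illustrative.
import json
import logging

logger = logging.getLogger("profile_migration")

def transform_row(user_id: int, profile_json: str):
    """Return (address_row, preferences_row) or None if the JSON is invalid."""
    try:
        profile = json.loads(profile_json or "{}")
    except json.JSONDecodeError:
        logger.error("Skipping user %s: invalid profile JSON", user_id)
        return None
    address = profile.get("address", "")
    newsletter = bool(profile.get("preferences", {}).get("newsletter", False))
    return (
        {"user_id": user_id, "address_text": address},
        {"user_id": user_id, "newsletter_opt_in": newsletter},
    )

assert transform_row(1, '{"preferences": {}}')[1]["newsletter_opt_in"] is False
assert transform_row(2, "not json") is None
```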

    ---

    13. CI/CD Pipeline Setup

    Use when: Setting up a GitHub Actions workflow for automated testing and deployment.
    `markdown RALPH PROMPT: Create a GitHub Actions workflow for a Python package.

    REQUIREMENTS:

    • Runs on push to main and pull requests to main.
    • Jobs: lint (black, flake8), test (pytest with coverage), build (create wheel).
    • Python versions: 3.9 and 3.10.
    • Cache pip dependencies for speed.
    • Upload coverage report to Codecov (simulate with echo).
    • Deploy to PyPI on tag push (use --dry-run for safety).
    ATOMIC TASKS:
  • Write the .github/workflows/ci.yml file structure.
  • Define the lint job with black --check . and flake8.
  • Define the test job with matrix strategy for Python versions, using pytest --cov.
  • Implement caching of ~/.cache/pip.
  • Add a step to "upload" coverage (simulate with echo "Coverage report would be sent").
  • Add a deploy job that triggers only on tags, runs twine upload --dry-run.
PASS/FAIL CRITERIA:

    • [PASS] YAML is syntactically valid (correct indentation, keys).
    • [PASS] test job runs on both Python 3.9 and 3.10 via matrix.
    • [PASS] deploy job has if: startsWith(github.ref, 'refs/tags/').
    • [PASS] Caching uses the actions/cache@v3 action with a key based on **/requirements.txt.
    • [PASS] All jobs are part of a single workflow and have descriptive names.
    ITERATION LOGIC:
    • Validate YAML structure: ensure jobs: is at correct level. If malformed, fix indentation.
    • Check the matrix strategy: if only one Python version is listed, add 3.10.
    • Verify the deploy job condition. If it runs on push to main, add the tag filter.
    ` Explanation: Claude builds a complete, configurable pipeline, iterating on YAML syntax and job dependencies until it meets all automation requirements.

    ---

    14. Docker Configuration

    Use when: Dockerizing a web application with multi-stage build and best practices.
    `markdown RALPH PROMPT: Write a production-ready Dockerfile for a Node.js + PostgreSQL app.

    SPECIFICATIONS:

    • App structure: package.json, server.js (API), client/ (React build).
    • Use multi-stage: build React in Node, then serve with Nginx.
    • Use node:18-alpine for build, nginx:alpine for final.
    • Set environment variables for NODE_ENV=production.
    • Expose port 80.
    • Ensure no sensitive files (.env, node_modules) are left in final image.
    • Include a .dockerignore file.
    ATOMIC TASKS:
  • Create .dockerignore with entries for node_modules, .git, .env.
  • Write first stage: install dependencies, build React app (npm run build).
  • Write second stage: copy built static files from first stage to Nginx HTML directory.
  • Copy custom nginx.conf if needed (assume default works).
  • Set non-root user for Nginx stage.
  • Expose port 80 and define the default command.
PASS/FAIL CRITERIA:

    • [PASS] .dockerignore includes at least 5 common ignore patterns.
    • [PASS] Multi-stage build is used, final image does not contain node or dev dependencies.
    • [PASS] The COPY command for React assets uses --from=builder syntax.
    • [PASS] Port 80 is exposed.
    • [PASS] The final stage runs as a non-root user (e.g., nginx user).
    ITERATION LOGIC:
    • Check the final image size: if the node layer is present, ensure COPY --from is correctly used.
    • Verify .dockerignore has Dockerfile and docker-compose.yml. If missing, add them.
    • Ensure the USER nginx directive is present. If the stage runs as root, add the user switch.
    ` Explanation: Claude iterates on Docker best practices—multi-stage, security, minimal size—until the configuration is production-optimized.

    ---

    15. API Client Library

    Use when: Creating a Python SDK for a REST API, with retries, error handling, and models.
    `markdown RALPH PROMPT: Design a Python client for a hypothetical "Todo API" (api.example.com).

    API SPEC:

    • Base URL: https://api.example.com/v1
    • Endpoints: GET /todos, POST /todos, GET /todos/{id}, PUT /todos/{id}, DELETE /todos/{id}
    • Authentication: API Key in X-API-Key header.
    • Models: Todo has id, title, completed, created_at.
    ATOMIC TASKS:
  • Define a Todo dataclass with type hints.
  • Create TodoClient class with __init__(api_key, base_url).
  • Implement get_todos() with error handling (raise HTTPError on 4xx/5xx).
  • Implement create_todo(title) that returns a Todo instance.
  • Add automatic retry logic for 429/503 statuses (use tenacity or manual retry).
  • Write a usage example showing list and create.
PASS/FAIL CRITERIA:

    • [PASS] Todo dataclass has fields id: int, title: str, completed: bool, created_at: datetime.
    • [PASS] All methods include the X-API-Key header.
    • [PASS] create_todo sends {"title": title} JSON and returns a Todo object.
    • [PASS] Retry logic is present for status codes 429 and 503 (max 3 retries).
    • [PASS] Example usage is correct and runnable (with placeholder API key).
    ITERATION LOGIC:
    • Verify the dataclass uses from __future__ import annotations or string types if needed. If datetime causes error, adjust import.
    • Check header setting: if requests.post(...) is called without headers, add them.
    • Test retry logic: ensure it's a simple for _ in range(3): try.... If missing, implement.
    ` Explanation: Claude builds a full-featured, robust client library, iterating until it includes proper error handling, retries, and a clean user-facing interface.
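A compact sketch of the client shape described above follows; api.example.com comes from the spec, while the backoff values and the from_json helper are illustrative choices.

```python
# Compact sketch of the client; api.example.com is from the spec, while the retry
# backoff and the from_json helper are illustrative choices.
from dataclasses import dataclass
from datetime import datetime
import time
import requests

@dataclass
class Todo:
    id: int
    title: str
    completed: bool
    created_at: datetime

    @classmethod
    def from_json(cls, data: dict) -> "Todo":
        return cls(
            id=data["id"],
            title=data["title"],
            completed=data["completed"],
            created_at=datetime.fromisoformat(data["created_at"]),
        )

class TodoClient:
    def __init__(self, api_key: str, base_url: str = "https://api.example.com/v1"):
        self.base_url = base_url
        self.headers = {"X-API-Key": api_key}

    def _request(self, method: str, path: str, **kwargs) -> requests.Response:
        for attempt in range(3):  # retry 429/503, max 3 attempts
            resp = requests.request(method, f"{self.base_url}{path}",
                                    headers=self.headers, timeout=10, **kwargs)
            if resp.status_code in (429, 503) and attempt < 2:
                time.sleep(2 ** attempt)  # simple backoff: 1s then 2s
                continue
            resp.raise_for_status()  # raises requests.HTTPError on 4xx/5xx
            return resp

    def get_todos(self) -> list[Todo]:
        return [Todo.from_json(t) for t in self._request("GET", "/todos").json()]

    def create_todo(self, title: str) -> Todo:
        return Todo.from_json(self._request("POST", "/todos", json={"title": title}).json())
```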

    ---

    These 15 Ralph prompt templates provide a complete framework for autonomous, high-quality code development. By enforcing atomic tasks with binary pass/fail criteria, they transform Claude from an assistant into a relentless engineer that iterates until every objective is met. Copy, paste, and adapt them for your specific projects to implement the Ralph Loop in your workflow.

## Ralph Prompts for Research & Analysis (12 Templates)

    Ralph prompts transform research and analysis from a subjective, open-ended exercise into a rigorous, verifiable process. By forcing Claude to define, test, and iterate against explicit criteria, these templates ensure outputs are comprehensive, accurate, and actionable—not just plausible-sounding summaries.

---

    1. Competitive Analysis Ralph Prompt

    Use this template when you need a structured, feature-by-feature comparison of competitors to identify strategic advantages, gaps, and market positioning.
    `markdown # ATOMIC TASK: Generate a Comprehensive Competitive Analysis

    OBJECTIVE

    Analyze [Your Company/Product] and its direct competitors ([Competitor A], [Competitor B], [Competitor C]) to produce a feature comparison matrix, SWOT analysis for each, and strategic recommendations.

    ATOMIC TASKS & PASS/FAIL CRITERIA

  • Feature Matrix Creation
  • * PASS: Matrix includes at least 15 key product/service features. Each cell is populated with a verifiable fact (e.g., "Yes," "No," "Limited," "Paid Add-on"). Sources for each fact are cited in a footnote format. * FAIL: Features are vague, fewer than 15, or facts are unsourced/assumed.
  • Individual SWOT Analysis
  • * PASS: Each company has a dedicated SWOT with 3+ items per quadrant (Strengths, Weaknesses, Opportunities, Threats). Items are specific (e.g., "Strong brand recognition in the SME sector," not "good brand"). * FAIL: SWOTs are generic, imbalanced, or contain fewer than 3 items per quadrant.
  • Strategic Recommendation Synthesis
  • * PASS: Provides 3+ actionable recommendations for [Your Company/Product] based on the matrix and SWOT. Each recommendation explicitly ties to a discovered gap, weakness, or opportunity. * FAIL: Recommendations are generic (e.g., "improve marketing") or not directly derived from the preceding analysis.

    OUTPUT FORMAT

    Deliver a report with: 1) Introduction, 2) Feature Comparison Matrix, 3) Competitor SWOT Analyses, 4) Strategic Recommendations.

    ITERATION LOGIC

    Claude will first generate the analysis. It must then TEST its output against each Pass/Fail criterion. If the feature matrix lacks sources, it must re-research to find citations. If a SWOT is weak, it must deepen its analysis. It iterates by revisiting failed tasks, enhancing research, and refining the output until all criteria pass.
`

---

    2. Market Research Ralph Prompt

    Use this template to define a new market's size, segmentation, customer demographics, and entry requirements with validated data.
    `markdown # ATOMIC TASK: Conduct Foundational Market Research

    OBJECTIVE

    Research the [Target Market, e.g., "North American Plant-Based Snack Food"] market to determine total addressable market (TAM), serviceable addressable market (SAM), key customer segments, buying drivers, and regulatory barriers.

    ATOMIC TASKS & PASS/FAIL CRITERIA

  • Market Sizing (TAM/SAM)
  • * PASS: Provides TAM and SAM figures in USD (or relevant currency). Each figure is sourced from a reputable market research firm, government data, or industry association report (cite source). Includes explanation of calculation methodology. * FAIL: Figures are unsourced, from low-quality blogs, or methodology is unclear.
  • Customer Persona Development
  • * PASS: Defines 3 distinct customer segments. For each, includes: demographic/psychographic profile, core need/pain point, primary buying criteria, and estimated segment size or percentage of SAM. * FAIL: Segments are not distinct (e.g., only differ by age), lack a defined need, or have no size estimate.
  • Entry Barrier Analysis
  • * PASS: Lists 4+ key market entry barriers (e.g., regulations, certifications, supply chain challenges, dominant competitors). For each, provides a brief explanation and a data point or source illustrating the barrier's impact. * FAIL: Lists generic barriers (e.g., "competition") without specific, research-backed details.

    OUTPUT FORMAT

    A report with: 1) Executive Summary, 2) Market Size & Growth, 3) Customer Segments, 4) Market Drivers & Trends, 5) Entry Barriers & Requirements.

    ITERATION LOGIC

    Claude will generate the report and then act as its own reviewer. It will check sources for market size—if a source is weak, it must find a better one. If customer segments are fuzzy, it must research further to define them sharply. Iteration continues until all data is verified and criteria are met.
`

---

    3. Technical Research Report Ralph Prompt

    Use this template when evaluating technologies, frameworks, or architectures, requiring objective comparison based on performance, scalability, and community support.
    `markdown # ATOMIC TASK: Produce a Technical Options Analysis Report

    OBJECTIVE

    Research and compare [Technology Options, e.g., "React vs. Vue vs. Svelte for a dynamic dashboard"] to provide a data-driven recommendation based on defined technical and business constraints.

    ATOMIC TASKS & PASS/FAIL CRITERIA

  • Criteria-Based Comparison Table
  • * PASS: Table compares all options across 8+ criteria (e.g., learning curve, performance, bundle size, TypeScript support, job market). Each cell contains a specific, verifiable fact or metric (e.g., "Bundle size: ~30kb gzipped," "GitHub Stars: 220k"). * FAIL: Criteria are subjective ("ease of use"), cells contain opinions ("good"), or metrics are missing.
  • Benchmark/Evidence Compilation
  • * PASS: For at least 3 critical criteria (e.g., performance), includes links to or summaries of third-party benchmark studies, official documentation, or GitHub issue trends that support the facts in the table. * FAIL: Lacks external evidence; analysis is based solely on general knowledge.
  • Contextualized Recommendation
  • * PASS: Makes a clear recommendation for one option. Justification directly references the comparison table, weighs criteria according to project constraints [State Constraints: e.g., "team familiarity is highest priority"], and acknowledges trade-offs. * FAIL: Recommendation is wishy-washy, ignores stated constraints, or is not logically derived from the table.

    OUTPUT FORMAT

    Report with: 1) Introduction & Constraints, 2) Comparison Table, 3) Deep Dive on Key Criteria, 4) Recommendation & Implementation Notes.

    ITERATION LOGIC

    After drafting, Claude must validate every fact in the comparison table. If a metric like "time to interactive" is stated, it must find a source. If it cannot verify a fact, it must revise the table. It iterates by sourcing, correcting, and re-weighting analysis until the report is defensible and all criteria pass.
`

---

    4. Literature Review Ralph Prompt

    Use this template to synthesize academic papers, articles, or key texts on a specific topic, identifying consensus, debates, and gaps in the existing research.
    `markdown # ATOMIC TASK: Synthesize a Scholarly Literature Review

    OBJECTIVE

    Review the key literature on [Research Topic, e.g., "Impact of Microplastics on Soil Microbiology"] to map the theoretical landscape, identify major findings and methodologies, and pinpoint research gaps.

    ATOMIC TASKS & PASS/FAIL CRITERIA

  • Thematic Organization
  • * PASS: Groups reviewed works into 4+ coherent thematic categories (e.g., "Detection Methods," "Ecological Impact Studies," "Degradation Pathways"). Each category is named and described with 2-3 sentences. * FAIL: Presents a simple list of summaries without thematic synthesis.
  • Source Synthesis & Citation
  • * PASS: Discusses at least 10 relevant scholarly sources (papers, books). For each thematic category, synthesizes findings from multiple sources, noting agreement or controversy. In-text citations are provided in a consistent format (e.g., (Author, Year)). * FAIL: Discusses fewer than 10 sources, or simply describes papers sequentially without synthesis.
  • Gap Identification
  • * PASS: Clearly identifies 3+ specific, justified gaps in the current research (e.g., "Long-term studies (>5 years) are lacking," "Most research focuses on marine environments, not terrestrial"). Each gap is logically derived from the reviewed literature. * FAIL: Gaps are vague ("more research is needed") or not based on the presented review.

    OUTPUT FORMAT

    A structured review with: 1) Introduction & Scope, 2) Thematic Synthesis, 3) Critical Analysis & Debates, 4) Identified Research Gaps, 5) Conclusion.

    ITERATION LOGIC

    Claude will write the review, then test it. Are there at least 10 sources properly synthesized? If not, it must research and incorporate more. Are the themes distinct, or do they overlap? It must reorganize. It iterates by deepening synthesis, improving citation support, and sharpening gap analysis until the output is academically robust.
`

(Templates 5-12 follow the same comprehensive structure in a more compact format, each with its own Atomic Tasks, Pass/Fail Criteria, and Iteration Logic tailored to the research type.)

    5. Data Analysis Ralph Prompt

    Use this template to guide Claude in cleaning, exploring, and interpreting a provided dataset to generate statistically sound insights and visualizations.
    `markdown # ATOMIC TASK: Analyze Dataset and Generate Insights Objective: Analyze the provided dataset [Dataset Name/Description] to answer the key question: [e.g., What are the primary factors influencing customer churn?]. Atomic Tasks:
  • Data Quality Report: Profile the data. PASS: Report lists column names, data types, % of missing values for each column, and identifies 3+ potential data quality issues (e.g., outliers, inconsistencies). FAIL: Report is incomplete or misses major issues.
  • Exploratory Analysis: Calculate descriptive statistics and create 3+ key visualizations (e.g., histograms, scatter plots, correlation heatmap). PASS: Visualizations are correctly labeled and directly relevant to the key question. Statistics are calculated accurately. FAIL: Visualizations are irrelevant or incorrectly generated.
  • Insight Synthesis: Derive 3+ actionable insights answering the key question. PASS: Each insight is directly supported by the analysis in Task 2 (e.g., "Feature X has a 0.8 correlation with churn, as shown in Fig 2"). FAIL: Insights are generic or not backed by the performed analysis.
  • Iteration: Claude will run its analysis, then check the data profile for errors, validate its calculations, and ensure every insight is explicitly linked to a visualization or statistic. It will re-analyze and refine until all criteria pass. When to Use: When you have raw data and need a systematic, verifiable analysis pipeline executed.
    `
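For the Data Quality Report task, a pandas sketch along these lines would cover the column, type, and missing-value requirements; the IQR outlier rule is one illustrative choice among many, and df stands in for whatever dataset the prompt supplies.

```python
# Minimal sketch of the "Data Quality Report" step using pandas; df stands in for
# the dataset supplied to the prompt, and the IQR outlier rule is illustrative.
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> pd.DataFrame:
    """Return one row per column: dtype, % missing, and number of outliers (IQR rule)."""
    rows = []
    for col in df.columns:
        series = df[col]
        outliers = 0
        if pd.api.types.is_numeric_dtype(series):
            q1, q3 = series.quantile(0.25), series.quantile(0.75)
            iqr = q3 - q1
            outliers = int(((series < q1 - 1.5 * iqr) | (series > q3 + 1.5 * iqr)).sum())
        rows.append({
            "column": col,
            "dtype": str(series.dtype),
            "pct_missing": round(float(series.isna().mean()) * 100, 2),
            "outliers_iqr": outliers,
        })
    return pd.DataFrame(rows)
```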

    6. Trend Analysis Ralph Prompt

    Use this template to identify, validate, and project emerging trends from news, social data, and market reports.
    `markdown # ATOMIC TASK: Identify and Validate Emerging Trends Objective: Identify the top emerging trends in [Industry/Field, e.g., Remote Work Technology] over the past [Timeframe, e.g., 18 months] and assess their likely trajectory. Atomic Tasks:
  • Trend Identification & Sourcing: Identify 5+ candidate trends. PASS: Each trend is named (e.g., "Asynchronous Video Collaboration") and supported by 2+ recent (<6 months old) citations from reputable industry publications. FAIL: Trends are vague or lack recent, credible sources.
  • Momentum Validation: Quantify the momentum for each trend. PASS: For each trend, provides at least one quantitative indicator (e.g., search volume growth %, venture funding total, number of new product launches). FAIL: Discussion is purely qualitative with no hard metrics.
  • Trajectory Projection: Project the 12-month trajectory for each trend. PASS: For each trend, classifies trajectory as "Accelerating," "Plateauing," or "Decelerating" with a rationale based on the validation data and market forces. FAIL: Projections are guesses without a logical link to the analysis.
  • Iteration: Claude will find trends and sources, then verify the recency and credibility of each source. It will search for quantitative data to support each trend, refining or replacing trends that lack evidence. It loops until all trends are well-sourced and validated. When to Use: For strategic planning, content calendaring, or investment theses requiring evidence-based trend forecasting.
    `

    7. User Research Synthesis Ralph Prompt

    Use this template to transform raw user feedback (interviews, surveys, support tickets) into structured insights, personas, and journey maps.
    `markdown # ATOMIC TASK: Synthesize Raw User Feedback into Actionable Insights Objective: Synthesize the following user feedback [Paste feedback or data source] to identify key pain points, user goals, and opportunities for improvement. Atomic Tasks:
  • Affinity Clustering: Group feedback into thematic clusters. PASS: Creates 4-6 distinct clusters with clear labels (e.g., "Onboarding Confusion," "Pricing Transparency Issues"). Each cluster contains at least 3 data points (quotes, ticket excerpts). FAIL: Clusters are too broad ("Problems") or contain fewer than 3 data points.
  • Pain Point & Goal Extraction: Extract specific pains and goals. PASS: Lists 5+ specific pain points (e.g., "Users cannot find the export button") and 3+ user goals (e.g., "Quickly share reports with managers") derived directly from the clustered feedback. FAIL: Pains/goals are inferred, not directly quoted from the data.
  • Persona/Journey Sketch: Create a summary persona and journey stage. PASS: Creates a brief persona (name, role, core goal) and maps 1 key pain point to a specific stage in their journey (e.g., "At the 'Reporting' stage, Sam cannot export data"). FAIL: Persona is generic or pain is not mapped to a journey stage.
  • Iteration: Claude will cluster the data, then test by checking if each cluster label accurately reflects all data points within it. It will ensure every listed pain point has a direct quote backing it up. It will refine clusters and extracts until the synthesis is grounded 100% in the provided data. When to Use: After gathering qualitative user data to move from anecdotes to structured, actionable insights.
    `

    8. Financial Analysis Ralph Prompt

    Use this template to analyze financial statements, calculate key ratios, and assess the financial health and performance of a company.
    `markdown # ATOMIC TASK: Perform a Ratio-Based Financial Analysis Objective: Analyze the financial statements for [Company Name] for [Fiscal Year/Period] to assess profitability, liquidity, solvency, and efficiency. Atomic Tasks:
  • Key Ratio Calculation: Calculate 8+ standard financial ratios. PASS: Calculates at least 2 ratios from each category (Profitability, Liquidity, Solvency, Efficiency). Shows the formula used and the raw input numbers from the statements. FAIL: Ratios are miscalculated, or formulas/inputs are not shown.
  • Ratio Interpretation & Benchmarking: Interpret each ratio. PASS: For each ratio, provides a 1-sentence interpretation (e.g., "A current ratio of 1.5 indicates adequate short-term liquidity"). For 4+ key ratios, compares the result to industry average or prior period, citing the benchmark source. FAIL: Interpretation is missing or benchmarking is absent for major ratios.
  • Holistic Health Assessment: Provide an overall assessment. PASS: Gives a clear, balanced assessment (e.g., "Profitable but highly leveraged") supported by at least 3 ratio findings. Identifies 1-2 major financial risks or strengths. FAIL: Assessment is vague or contradicts the calculated ratio data.
  • Iteration: Claude will calculate ratios, then double-check every calculation against the formulas. It will verify benchmark data sources. If the interpretation doesn't match the number, it must correct it. It iterates until the math is flawless and the narrative is consistent with the numbers. When to Use: For evaluating investment opportunities, competitor financial health, or preparing for an audit.
    `
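    The "show the formula and the raw inputs" requirement in Task 1 is easiest to audit when the calculation is written out explicitly. A minimal sketch, using hypothetical statement figures, of what that audit trail can look like:
    `python
    # Hypothetical figures pulled from the financial statements (in $ thousands)
    current_assets = 4_200
    current_liabilities = 2_800
    total_debt = 6_500
    total_equity = 5_000
    net_income = 900
    revenue = 12_000

    ratios = {
        # Liquidity: Current Ratio = Current Assets / Current Liabilities
        "current_ratio": current_assets / current_liabilities,
        # Solvency: Debt-to-Equity = Total Debt / Total Equity
        "debt_to_equity": total_debt / total_equity,
        # Profitability: Net Margin = Net Income / Revenue
        "net_margin": net_income / revenue,
    }

    for name, value in ratios.items():
        print(f"{name}: {value:.2f}")
    `
    If the interpretation in Task 2 disagrees with these printed values, the iteration step has something concrete to correct.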

    9. Risk Assessment Ralph Prompt

    Use this template to systematically identify, evaluate, and prioritize risks for a project, decision, or strategy.
    `markdown # ATOMIC TASK: Conduct a Qualitative Risk Assessment Objective: Identify and prioritize key risks associated with [Project/Initiative, e.g., Launching Product X in Country Y]. Atomic Tasks:
  • Risk Identification: Brainstorm potential risks. PASS: Lists 10+ distinct risks across categories (Strategic, Operational, Financial, Compliance). Each risk is stated as a clear event (e.g., "Key supplier fails to deliver component on time"). FAIL: List is short, or risks are vague ("something goes wrong").
  • Risk Scoring: Score each risk on Likelihood and Impact. PASS: Uses a consistent 1-5 scale for Likelihood and Impact. Provides a 1-sentence justification for each score. FAIL: Scores are arbitrary or justifications are missing.
  • Prioritization & Mitigation: Prioritize and propose mitigations. PASS: Creates a 2x2 Risk Matrix (High/Low Likelihood vs Impact). For all "High-High" quadrant risks, proposes 1-2 specific mitigation actions. FAIL: No visual prioritization, or mitigations are generic ("have a backup plan").
  • Iteration: Claude will list and score risks, then test for consistency: Are similar risks scored similarly? Are justifications logical? It will re-evaluate scores, regroup risks, and sharpen mitigations until the assessment is internally consistent and actionable. When to Use: During project planning, strategic decision-making, or compliance preparation to proactively manage threats.
    `

    10. Industry Report Ralph Prompt

    Use this template to create a broad, executive-level overview of an industry's dynamics, key players, value chain, and future outlook.
    `markdown # ATOMIC TASK: Generate an Executive Industry Overview Objective: Provide a high-level overview of the [Industry Name] industry, covering its structure, competitive landscape, and critical success factors. Atomic Tasks:
  • Value Chain Mapping: Diagram the industry value chain. PASS: Identifies 5+ key stages from raw materials to end-user. Names 1-2 major activities or players at each stage. FAIL: Value chain is incomplete or misses critical stages.
  • Competitive Landscape Analysis: Map the key players. PASS: Identifies 3-5 market leaders and 2-3 disruptors/niche players. Characterizes the competitive intensity (e.g., fragmented, oligopoly) and basis of competition (e.g., price, innovation). FAIL: Fails to characterize the landscape or misses major player categories.
  • Critical Success Factor (CSF) Identification: List the CSFs. PASS: Lists 4-6 factors crucial for success in this industry (e.g., "Regulatory expertise," "Economies of scale in logistics"). Each CSF is justified with a brief explanation of why it matters. FAIL: CSFs are obvious ("good product") or unjustified.
  • Iteration: Claude will draft the report, then validate each part. Is the value chain logical and complete based on its research? Does the player list include recent disruptors? It will research to fill gaps, ensure the CSFs are industry-specific, and refine until the overview is comprehensive and insightful. When to Use: For onboarding new team members, exploring new markets, or preparing for an investment pitch.
    `

    11. Patent Research Ralph Prompt

    Use this template to search and analyze patent landscapes, focusing on claims, citations, and freedom-to-operate considerations.
    `markdown # ATOMIC TASK: Analyze Patent Landscape for a Technology Area Objective: Research patents related to [Specific Technology/Function, e.g., "solid-state battery electrolytes"] to understand key assignees, innovation trends, and potential freedom-to-operate concerns. Atomic Tasks:
  • Portfolio Identification & Summary: Identify key patents. PASS: Finds and lists 5+ relevant granted patents or recent applications. For each, provides patent number, assignee (company), priority date, and a one-line summary of the core claim. FAIL: List is incomplete, or summaries do not capture the claim's essence.
  • Assignee & Citation Analysis: Analyze the landscape. PASS: Identifies the top 3 assignees by portfolio size. Creates a simple citation map or notes a key patent that is highly cited. FAIL: Fails to identify leading players or citation patterns.
  • FTO Preliminary Flagging: Flag potential risks. PASS: Based on claim summaries, flags 1-2 patents whose claims appear broadest or most relevant to a hypothetical product doing [Brief Product Description]. Clearly states this is not legal advice. FAIL: Does not attempt to relate patents to the product concept.
  • Iteration: Claude will search patent databases, summarize claims, and then test its work: Are the summaries accurate to the independent claims? Are the top assignees correct based on the found set? It will refine searches, re-summarize, and re-analyze until the landscape is accurately and usefully depicted. When to Use: During early-stage R&D or product development to gauge competitive IP and innovation density.
    `

    12. Academic Paper Summary Ralph Prompt

    Use this template to deconstruct a complex academic paper into structured, plain-language summaries focusing on problem, method, result, and implication.
    `markdown # ATOMIC TASK: Create a Structured Summary of an Academic Paper Objective: Read the provided academic paper [Paper Title/Citation] and produce a structured summary for a non-specialist audience. Atomic Tasks:
  • Core Problem & Gap Extraction: Identify the research gap. PASS: Clearly states the broad problem field and the specific gap in knowledge the paper addresses, in one paragraph. Uses non-jargon language. FAIL: Restates the paper's abstract verbatim or uses excessive field-specific jargon.
  • Methodology Explanation in Plain Language: Explain the method. PASS: Describes the research method (e.g., experiment, survey, model) and key steps in simple terms. A layperson should understand what the researchers did. FAIL: Description is overly technical or omits key steps.
  • Findings & Significance Translation: Translate results and impact. PASS: Lists 2-3 key findings. Explains the significance: "This matters because..." linking it back to the core problem. Avoids simply repeating numerical results. FAIL: Just lists results without interpreting their importance.
  • Iteration: Claude will write the summary, then test it for clarity and accuracy. It will ask: "Would someone outside this field understand the gap and method?" It will replace jargon, simplify sentences, and ensure the 'significance' is genuinely explanatory. It iterates until the summary is both accurate and accessible. When to Use: To quickly grasp the essence of dense academic literature for application, reporting, or cross-disciplinary learning.
    `

    Ralph Prompts for Content Creation (12 Templates)

    Ralph prompts transform content creation from a single-pass draft into a rigorous, quality-assured process. These templates enforce structure, objective evaluation, and iterative refinement, ensuring Claude produces work that meets explicit standards, not just subjective "goodness."

    1. Blog Post Writing

    This prompt breaks down a blog post into structural and qualitative atomic tasks, ensuring the final piece is engaging, well-structured, and achieves its core purpose.
    `markdown RALPH PROMPT: Write a comprehensive, SEO-optimized blog post titled "[TARGET TITLE]" targeting the keyword "[PRIMARY KEYWORD]" for an audience of [TARGET AUDIENCE]. ATOMIC TASKS:
  • Task 1 - Outline & H2s: Create a detailed outline with at least 5 H2 sections that logically flow from introduction to conclusion. Each H2 must promise clear value.
  • Task 2 - Introduction (150 words): Write an introduction that hooks the reader, states the core problem/promise, and includes the primary keyword naturally.
  • Task 3 - Body Sections (300+ words each): For each H2, write a comprehensive section with a topic sentence, supporting points, examples/data, and a transition to the next section. Integrate secondary keywords.
  • Task 4 - Conclusion & CTA (100 words): Summarize key takeaways and provide a clear, compelling call-to-action.
  • Task 5 - SEO Elements: Craft a meta description (~155 chars) and 3-5 FAQ schema questions/answers based on the content.
  • PASS/FAIL CRITERIA:
    • PASS: Outline has logical flow; introduction hooks and states promise; each body section is substantive (>300 words) with examples; conclusion summarizes and has CTA; meta description is compelling and under 160 chars; FAQ schema is relevant.
    • FAIL: Outline is illogical; introduction is vague; any body section is thin (<250 words) or lacks examples; conclusion is missing CTA or is just a summary; SEO elements are missing or poorly written.
    ITERATION APPROACH: Claude must first output the outline (Task 1) and await confirmation or revision. Then, it will write the full post. After the first draft, Claude must self-test against all criteria. If any section fails (e.g., a body section is too short), Claude must diagnose why (e.g., "needs more concrete examples") and rewrite that specific atomic task until it passes.
    `
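    Some of the criteria above (section length, meta description length) are purely mechanical, so you can hand Claude a checker to run against its own draft rather than relying on judgment. A minimal sketch, assuming the draft is held as a mapping of H2 headings to section text:
    `python
    def check_blog_criteria(sections: dict[str, str], meta_description: str) -> list[str]:
        """Return the list of failed criteria; an empty list means all checks pass."""
        failures = []
        for heading, body in sections.items():
            if len(body.split()) < 300:
                failures.append(f"FAIL: section '{heading}' is under 300 words")
        if len(meta_description) > 160:
            failures.append("FAIL: meta description exceeds 160 characters")
        return failures

    # Hypothetical draft: heading -> section text
    draft = {"Why It Matters": "word " * 320, "How It Works": "word " * 150}
    print(check_blog_criteria(draft, meta_description="A concise, compelling summary."))
    `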

    2. Technical Documentation

    This prompt ensures documentation is accurate, sequentially logical, and usable for a target skill level, with mandatory verification steps.
    `markdown RALPH PROMPT: Create user documentation for the feature/process: "[FEATURE NAME]". The audience is [BEGINNER/INTERMEDIATE/EXPERT] users. ATOMIC TASKS:
  • Task 1 - Prerequisites & Overview: List all required tools, access, or knowledge. Provide a 3-sentence overview of what the documentation will achieve.
  • Task 2 - Step-by-Step Procedure: Break the process into numbered, sequential steps. Each step must be a single, unambiguous action.
  • Task 3 - Step Context & Screenshot Cues: For each step, add a sub-bullet explaining why the step is done and a note describing what a correct screenshot would show (e.g., "Screenshot: The modal window with 'Submit' button highlighted").
  • Task 4 - Troubleshooting Table: Create a table with columns: "Problem," "Likely Cause," "Solution" for 3-5 common errors.
  • Task 5 - Verification Task: End with a task for the user to verify success (e.g., "Run command X and confirm you see output Y").
  • PASS/FAIL CRITERIA:
    • PASS: Prerequisites are complete and accurate; steps are in perfect logical order and each is a single action; every step has context and a screenshot cue; troubleshooting table addresses realistic issues; verification task is concrete and testable.
    • FAIL: Missing prerequisite; steps can be misinterpreted or are out of order; any step lacks context/cue; troubleshooting is vague; no clear verification method.
    ITERATION APPROACH: Claude will write the draft. It must then role-play as a novice user and mentally execute the steps. If the mental simulation hits a snag (e.g., a missing piece of information), that corresponding atomic task fails and must be rewritten. The troubleshooting table must be cross-checked against the steps to ensure it addresses potential failure points in the procedure.
    `

    3. Email Sequence

    This prompt focuses on cohesion, persuasive logic, and clear CTAs across a sequence, treating each email as an interdependent atomic task.
    `markdown RALPH PROMPT: Write a [NUMBER]-email sequence for the campaign: "[CAMPAIGN GOAL]" targeting [AUDIENCE PERSONA]. The sequence should guide them from awareness to [DESIRED ACTION]. ATOMIC TASKS:
  • Task 1 - Sequence Map: Define the subject line, core goal, and CTA for each email in the sequence.
  • Task 2 - Email 1 (Awareness): Write a short, intriguing email that introduces a problem. CTA must be to read/learn more (e.g., click to article).
  • Task 3 - Email 2 (Value): Provide genuine, useful advice related to the problem. Build credibility. CTA is a softer engagement (e.g., reply with a question).
  • Task 4 - Email 3 (Offer): Present the solution/product as the answer to the problem introduced in Email 1. Overcome a key objection. CTA is the primary conversion (e.g., schedule, buy).
  • Task 5 - Email 4 (Nurture/Close): Write either a nurture email (more value) or a closing reminder email (urgency/scarcity). Include a final CTA.
  • PASS/FAIL CRITERIA:
    • PASS: Sequence map shows logical progression; each email has a distinct, appropriate goal; Email 1 hooks without selling; Email 2 provides tangible value; Email 3 directly addresses the problem from #1; each email has one clear, relevant CTA; tone is consistent.
    • FAIL: Sequence logic is broken (e.g., hard sell in Email 1); any email lacks a clear CTA or has multiple CTAs; Email 3 does not link back to Email 1's problem; tone shifts dramatically between emails.
    ITERATION APPROACH: Claude must first complete and validate the Sequence Map (Task 1). Then, it writes the emails in order. After the draft, Claude must read the sequence start-to-finish, checking for logical flow and CTA consistency. If Email 3's offer doesn't solve Email 1's problem, both tasks fail and must be reworked in tandem. Each email is an atomic task that can be individually iterated upon.
    `

    4. Landing Page Copy

    This prompt deconstructs high-conversion landing pages into core components, each with strict criteria for clarity and persuasion.
    `markdown RALPH PROMPT: Write copy for a landing page promoting [PRODUCT/SERVICE] designed to convert visitors seeking [SOLUTION TO PAIN POINT]. ATOMIC TASKS:
  • Task 1 - Hero Section: Create a main headline (≤10 words), a sub-headline (≤20 words), and a single, primary CTA button text. The headline must state the core benefit.
  • Task 2 - Pain Points & Solution: List 3-4 specific pain points the audience feels. Follow with 3-4 bullet points explaining how the product/service solves each.
  • Task 3 - Social Proof: Write two concise testimonials (1-2 sentences each) that speak to different benefits (e.g., ease of use, results).
  • Task 4 - Feature/Benefit Grid: Create a 3-column grid. Columns: "Feature," "How it Works (Brief)," "Benefit to You."
  • Task 5 - FAQ & Final CTA: Draft 3 FAQs that address key objections (price, complexity, timing). End with a final, repeated CTA section that includes a secondary, risk-reducing offer (e.g., free trial, money-back guarantee).
  • PASS/FAIL CRITERIA:
    • PASS: Hero headline is benefit-driven, not feature-driven; pain points are specific and resonate; solution bullets directly counter the pains; testimonials are believable and benefit-focused; feature grid clearly links features to user benefits; FAQs address real objections; CTAs are action-oriented.
    • FAIL: Headline is vague or feature-listed; pain points are generic; solution bullets don't map to pains; testimonials are weak; grid only lists features, not benefits; FAQs are trivial; CTA is passive ("Learn More" as primary).
    ITERATION APPROACH: Claude will output each section sequentially. After the first draft, it must review the page as a skeptical visitor. Does the hero section immediately communicate value? Do the pain points feel real? Does the feature grid explain "what's in it for me"? Any section failing this scrutiny is reworked. The connection between Task 2 (Pain/Solution) and Task 4 (Feature/Benefit) is critical and will be cross-verified.
    `

    5. Social Media Thread

    This prompt structures a viral-style thread, ensuring each post (tweet) has intrinsic value, builds momentum, and uses platform-specific mechanics.
    `markdown RALPH PROMPT: Write a Twitter/X thread of [NUMBER] posts about [THREAD TOPIC]. The goal is to provide value and encourage retweets/replies. ATOMIC TASKS:
  • Task 1 - Hook Post (Post 1): Write an opening post that states a surprising fact, asks a compelling question, or makes a bold promise. Must include relevant hashtags.
  • Task 2 - Value Posts (Posts 2-8): Each post must deliver one self-contained idea, tip, or insight. Use numbering (2/10, 3/10...). Alternate between statements, quick tips, and rhetorical questions.
  • Task 3 - Engagement Post (Middle Post): One post in the middle must explicitly ask a question to the audience to prompt replies.
  • Task 4 - Summary/Conclusion Post: Summarize the key takeaway or provide a final, impactful piece of advice.
  • Task 5 - CTA & Bridge Post (Final Post): Thank readers, ask for a retweet if they found it valuable, and link to a relevant resource (blog, newsletter, etc.).
  • PASS/FAIL CRITERIA:
    • PASS: Hook post is compelling enough to get a "read on"; each value post is understandable alone and contributes to the whole; engagement post has a clear question; summary post is concise and valuable; final post has a soft CTA. Total character count for each post is <280.
    • FAIL: Hook post is boring or clickbaity without substance; any value post is vague or off-topic; missing engagement post; summary post just repeats the hook; final post is only a hard sell.
    ITERATION APPROACH: Claude will write the thread in order. It must then read it post-by-post, simulating a user's feed. Does each post make you want to read the next? Does any post feel like filler? If a value post fails, it is rewritten. The thread's flow is an overarching criterion; if the momentum breaks, multiple consecutive posts may need revision.
    `

    6. Video Script

    This prompt ensures a video script is timed, visually cued, and structured for retention, treating each segment as a timed atomic task.
    `markdown RALPH PROMPT: Write a [DURATION]-minute explainer video script for [TOPIC]. Style: [CONVERSATIONAL/PROFESSIONAL/ENERGETIC]. ATOMIC TASKS:
  • Task 1 - Hook (0:00-0:30): Script a 30-second opening that states the viewer's problem and promises a solution. Include on-screen text cues for key words.
  • Task 2 - Core Explanation (0:30-2:00): Break down the main concept into 3 key parts. For each part, write voiceover and describe corresponding visuals/B-roll. Include a script for an on-screen graphic or diagram.
  • Task 3 - Practical Example/Walkthrough (2:00-3:30): Script a step-by-step walkthrough of applying the concept. Use "You" language. Describe screen recordings or animations.
  • Task 4 - Recap & Key Takeaway (3:30-4:00): Visually recap the 3 key parts on screen while voiceover reinforces them. State the single most important takeaway.
  • Task 5 - Outro & CTA (4:00-4:30): Thank viewer, direct to description for links, and prompt subscription/like. Include lower-third graphic cue.
  • PASS/FAIL CRITERIA:
    • PASS: Hook is under 30 sec and creates curiosity; core explanation is divided into 3 clear parts with strong visual cues; walkthrough is actionable and uses "you"; recap visually reinforces key parts; outro has clear CTA. Total script length matches ~150 words per minute target.
    • FAIL: Hook runs long or is vague; core explanation is monolithic without visual planning; walkthrough is theoretical; recap is only voice, no visual; missing CTA.
    ITERATION APPROACH: Claude will write the script with timestamps. It must then perform a time-check: reading the script aloud and timing each section. Any section running significantly over its time allocation fails and must be condensed. Additionally, for each section, the visual cue description is mandatory; a section lacking a visual plan fails and must be rethought.
    `

    7. Case Study

    This prompt enforces a data-driven, problem-solution-result narrative structure, demanding concrete evidence at each stage.
    `markdown RALPH PROMPT: Write a case study for [CLIENT/PROJECT] showcasing how [YOUR SOLUTION] solved [SPECIFIC CHALLENGE]. ATOMIC TASKS:
  • Task 1 - Executive Summary: Write a 100-word summary stating the client, their challenge, the solution implemented, and the quantitative results achieved.
  • Task 2 - The Challenge: Detail the client's initial situation. Include 2-3 specific, quantifiable pain points (e.g., "20 hours/week spent on manual reporting").
  • Task 3 - The Solution: Describe what was implemented. List specific features/processes used. Include a quote from the client (simulated) about the selection/implementation experience.
  • Task 4 - The Results: Present outcomes using metrics. Use bullet points with before/after comparisons (e.g., "Reduced processing time: 20 hrs → 2 hrs"). Attribute a result to a specific solution component from Task 3.
  • Task 5 - Conclusion & Implication: State the broader implication for the client's business. End with a generic CTA (e.g., "Could your business achieve similar results?").
  • PASS/FAIL CRITERIA:
    • PASS: Summary contains all four elements (client, challenge, solution, results); challenge has quantifiable metrics; solution details specific actions; results are quantitative and linked to solution components; conclusion looks forward.
    • FAIL: Summary is vague; challenge described only qualitatively; solution is a generic description; results are fluffy ("increased happiness"); no link between solution and result.
    ITERATION APPROACH: Claude writes the case study. The primary test is the link between Task 3 (Solution) and Task 4 (Results). For each result bullet point, Claude must identify which part of the solution caused it. If this link cannot be made, either the result is too vague or the solution is inadequately described, causing one or both tasks to fail and require revision with more concrete details.
    `

    8. Product Description

    This prompt moves beyond feature listing to craft benefit-driven, sensory, and objection-handling copy for an e-commerce context.
    `markdown RALPH PROMPT: Write a product description for [PRODUCT NAME], a [PRODUCT CATEGORY] that helps [TARGET USER] achieve [KEY BENEFIT]. ATOMIC TASKS:
  • Task 1 - Benefit-Driven Headline & Intro: Write a headline focusing on the primary user benefit. Follow with a 2-3 sentence intro that sets the scene and invokes a sensory or emotional desire.
  • Task 2 - Key Features → Benefits: List 3-5 key product features. For EACH, write 1-2 sentences translating it into a user benefit, using "you" language. (e.g., Feature: "500-thread count cotton." Benefit: "You experience hotel-grade luxury and comfort every night.")
  • Task 3 - Usage Scenario/Visual: Describe a short, specific scenario of someone using the product and enjoying the result. (e.g., "Imagine waking up feeling truly rested...").
  • Task 4 - Specifications & Details: Create a bulleted list of technical specs, dimensions, materials, and included items. This must be comprehensive and clear.
  • Task 5 - Social Proof & Guarantee: Add a line integrating social proof ("Join 10,000+ satisfied customers"). State the warranty/guarantee prominently.
  • PASS/FAIL CRITERIA:
    • PASS: Headline is benefit-focused, not just the product name; every feature has a clear benefit translation; usage scenario is vivid and relatable; specs are complete and unambiguous; social proof/guarantee is present.
    • FAIL: Headline is generic; features are listed without benefit translation; scenario is missing or generic; specs are incomplete; no trust elements.
    ITERATION APPROACH: After drafting, Claude must critically review Task 2. For each feature, it must ask: "So what?" If the benefit statement doesn't compellingly answer that question, that specific feature-benefit pair fails and must be rewritten. The description must pass a "scan test": a quick skim should reveal clear benefits, not just technical features.
    `

    9. Newsletter

    This prompt balances curation, original insight, and engagement in a standard newsletter format, with each section serving a distinct purpose.
    `markdown RALPH PROMPT: Write an issue for the newsletter "[NEWSLETTER NAME]" focused on [THEME/TOPIC]. The audience is [AUDIENCE DESCRIPTION]. ATOMIC TASKS:
  • Task 1 - Subject Line & Preheader: Write 3 options for an engaging subject line and a preheader text (≈40 chars) that complements it.
  • Task 2 - Opening Personal Note (2-3 sentences): Write a brief, personal intro from the editor linking the theme to a recent observation or experience.
  • Task 3 - Main Curated Item: Summarize one piece of external content (article, tool, podcast). Include: a link, a 2-sentence summary, and 1-2 sentences of original commentary/insight on why it matters.
  • Task 4 - Original Tip/Insight: Share one original, actionable piece of advice or a quick analysis related to the theme. Use bullet points or a very short paragraph.
  • Task 5 - Reader Engagement & CTA: Pose a question to readers to prompt replies. End with a clear, simple CTA (e.g., "Forward to a colleague," "Check out our guide").
  • PASS/FAIL CRITERIA:
    • PASS: Subject line options are compelling; personal note feels genuine and relevant; curated item has summary AND original commentary; original tip is truly actionable; engagement question is open-ended; CTA is singular and clear.
    • FAIL: Subject lines are bland; personal note is generic or missing; curated item is just a link and summary without commentary; original tip is vague/common knowledge; no engagement question; multiple or confusing CTAs.
    ITERATION APPROACH: Claude will draft the newsletter. The critical test is the value-add in Task 3 and Task 4. The curated item must pass the "so what?" test—the commentary must provide context the summary doesn't. The original tip must be non-obvious to the stated audience. If either section provides only surface-level information, it fails and requires deeper insight.
    `

    10. White Paper

    This prompt structures a formal, evidence-based document, mandating logical argumentation, data integration, and professional formatting.
    `markdown RALPH PROMPT: Write a white paper titled "[TITLE]" that makes the case for [CORE THESIS] aimed at [EXECUTIVES/TECHNICAL DECISION MAKERS]. ATOMIC TASKS:
  • Task 1 - Abstract (200 words): Summarize the problem, methodology, key findings, and business implications.
  • Task 2 - Introduction & Problem Statement: Define the industry landscape and articulate the specific, costly problem being addressed. Use data if possible.
  • Task 3 - Core Analysis/Argument (3 Sections): Create three H2 sections, each presenting a pillar of your argument. Each section must contain: a claim, supporting data/example, and a graph/figure concept described in text (e.g., "Figure 1: Chart showing correlation between X and Y").
  • Task 4 - Case Study Snapshot: Include a brief, 2-paragraph embedded case study illustrating the thesis in practice with quantifiable outcomes.
  • Task 5 - Conclusion & Recommendations: Restate the thesis in light of the analysis. Provide 3-5 actionable recommendations for the reader. Add an "About the Author/Company" boilerplate.
  • PASS/FAIL CRITERIA:
    • PASS: Abstract contains all required elements; problem statement is specific and data-informed; each analysis section has claim, support, and figure concept; case study has quantifiable results; recommendations are actionable and derived from the analysis.
    • FAIL: Abstract is incomplete; problem is vague; any analysis section is opinion without support; case study is anecdotal; recommendations are generic.
    ITERATION APPROACH: Claude must first create a detailed outline for Task 3 (Core Analysis). This outline will be verified for logical flow. The white paper will be written section by section. Each analysis section (Task 3) is an atomic task that must pass the "evidence test." If a claim is made without supporting data or a logical example, that entire section fails and must be researched/rewritten. The recommendations must be directly tied to the preceding analysis.
    `

    11. Press Release

    This prompt adheres to the strict AP-style format, ensuring newsworthiness, proper quoting, and essential informational components are present.
    `markdown RALPH PROMPT: Write a press release announcing [NEWS EVENT: e.g., Product Launch, Partnership, Funding Round]. ATOMIC TASKS:
  • Task 1 - Headline & Dateline: Write a compelling, news-style headline. Include a dateline (e.g., "SAN FRANCISCO, CA – Month Day, Year –").
  • Task 2 - Lead Paragraph: Write the first paragraph (≈35 words) answering: Who, What, When, Where, and Why (the significance).
  • Task 3 - Body & Quote: Elaborate on the news with 2-3 paragraphs of detail. Include a quote from a relevant executive (CEO, Founder, etc.) that adds color or vision, not just repeats facts.
  • Task 4 - Boilerplate & About: Write the standard "About [Company]" paragraph describing the company's mission and scope.
  • Task 5 - Contact Information & "###": Provide the standard "For media inquiries, contact:" line with name, email, and phone. End the release with "###" centered on its own line.
  • PASS/FAIL CRITERIA:
    • PASS: Headline is newsworthy; lead paragraph concisely covers 5 Ws; body provides necessary context; quote is insightful and humanizing; boilerplate is standard; contact info and end mark are present.
    • FAIL: Headline is promotional ("Amazing New Product!"); lead paragraph misses a W or is too long; body is just fluff; quote is robotic or redundant; missing boilerplate or contact info; no "###".
    ITERATION APPROACH: Claude will write the release. The most critical test is the lead paragraph (Task 2). Claude must extract the Who, What, When, Where, and Why from its own draft. If any element is missing or unclear, Task 2 fails and must be rewritten with stricter concision. The quote (part of Task 3) is also a key pass/fail item; it must sound like a human speaking, not corporate jargon.
    `

    12. SEO Content (Pillar Page)

    This prompt builds a comprehensive, interlinked pillar page designed to rank for a core topic and cluster keywords.
    `markdown RALPH PROMPT: Create a pillar page for the core topic "[CORE TOPIC]" targeting the primary keyword "[PRIMARY KW]" and incorporating cluster keywords: [KW1, KW2, KW3]. ATOMIC TASKS:
  • Task 1 - Comprehensive Outline: Create an outline with H1 (Primary KW), at least 5 H2s covering subtopics, and multiple H3s under each. Map cluster keywords to specific H2/H3 sections.
  • Task 2 - Introduction with Semantic Keywords: Write a 200-word introduction defining the core topic, its importance, and using the primary KW and 2-3 semantic variations naturally.
  • Task 3 - Detailed H2 Sections (400+ words each): Each H2 section must thoroughly explain a subtopic. Include at least one data point, example, or statistic. Naturally integrate assigned cluster keywords.
  • Task 4 - Internal Linking Plan: Within the body, add 3-5 contextual internal links to relevant blog posts or resources, using descriptive anchor text.
  • Task 5 - Conclusion & Next Steps: Summarize the key lessons from each H2 section. Provide a "Next Steps" section suggesting 3 logical actions for the reader (e.g., read a specific guide, use a tool, implement a tactic).
  • PASS/FAIL CRITERIA:
    • PASS: Outline is logically exhaustive; introduction defines topic and uses keywords naturally; each H2 section is substantive (>400 words) with evidence; internal links are contextual and relevant; conclusion synthesizes H2s and provides clear next steps.
    • FAIL: Outline is shallow; introduction is keyword-stuffed or vague; any H2 section is thin (<300 words) or lacks concrete detail; internal links are forced or missing; conclusion is just a repeat.
    ITERATION APPROACH: Claude must first submit and validate the outline (Task 1), ensuring cluster keywords are properly mapped. After writing, Claude will audit the content for keyword integration—it must be natural, not forced. Each H2 section is an atomic task; if one lacks depth or a concrete example, it fails individually. The internal linking plan is also tested; links must add value to the reader's journey, not just exist.
    `

    Ralph Prompts for Business & Planning (12 Templates)

    Ralph prompts transform business and planning from subjective, open-ended tasks into rigorous, verifiable processes. By applying the Ralph Loop—breaking work into atomic tasks with explicit pass/fail criteria—you ensure outputs are complete, consistent, and ready for execution, not just "good enough" first drafts.

    1. Business Plan Section Template

    When to Use: For drafting any core section of a business plan (Executive Summary, Marketing, Operations, Financials) where completeness and data alignment are critical.
    `markdown ACT AS a business plan consultant. Your task is to draft the [INSERT SECTION NAME, e.g., "Marketing Strategy"] section for a business plan for [BUSINESS DESCRIPTION].

    RALPH LOOP INITIATED. You will work through these ATOMIC TASKS:

  • Outline Structure: List all required sub-sections for a professional [SECTION NAME].
  • Draft Content: Write full content for each sub-section from Task 1, incorporating specific data: [INSERT KEY DATA POINTS, e.g., target customer demographics, budget constraints, competitive names].
  • Internal Consistency Check: Verify that all numbers, claims, and strategies align across all sub-sections and with this business's core offering: [CORE OFFERING].
  • Clarity & Actionability Review: Ensure every proposed strategy includes a clear "how" and "next step."
  • PASS/FAIL CRITERIA:

    • PASS: All sub-sections from Task 1 are present and fully addressed.
    • PASS: All provided data points ([LIST THEM]) are accurately incorporated.
    • PASS: No contradictions exist between sub-sections regarding budget, timeline, or capabilities.
    • PASS: An external reader could execute one proposed tactic based solely on your description.
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, diagnose which criteria failed. Revise the draft to address the specific failure points. Re-test against all criteria. Do not proceed until all pass.
    `

    2. Project Plan Template

    When to Use: To create a detailed, executable project plan with clear dependencies and deliverables, ensuring no phase is ambiguous.
    `markdown ACT AS a project manager. Create a comprehensive project plan for: [PROJECT NAME & GOAL].

    RALPH LOOP INITIATED. Execute these ATOMIC TASKS:

  • Phase Breakdown: Divide the project into sequential, major phases (e.g., Initiation, Planning, Execution, Closure).
  • Task Listing: Under each phase, list every discrete task required. Format as "Verb + Deliverable" (e.g., "Draft stakeholder requirements document").
  • Assignment & Timing: Assign an owner (use role: e.g., "Tech Lead") and a realistic working-day duration to each task.
  • Dependency Mapping: For each task, list the task ID(s) that must be complete before it can start.
  • Milestone Definition: Identify key deliverable milestones and their verification method (e.g., "Milestone: MVP Deployed. Verified by: Successful smoke test.").
  • PASS/FAIL CRITERIA:

    • PASS: Every task has a clear owner (role) and duration.
    • PASS: Every task has explicit dependencies listed or is marked as "Phase Start."
    • PASS: The critical path can be identified from start to end.
    • PASS: Milestones are testable/verifiable events, not just dates.
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, identify the faulty phase or task group. Redesign the task sequence and dependencies to resolve the failure. Re-run the check. Iterate until the plan is logically watertight.
    `

    3. Strategic Analysis Template

    When to Use: To move beyond surface-level analysis and generate deep, evidence-based strategic insights with clear implications.
    `markdown ACT AS a chief strategy officer. Perform a strategic analysis of [TOPIC, e.g., "entering the German SaaS market"].

    RALPH LOOP INITIATED. Complete these ATOMIC TASKS:

  • Factor Identification: List the 5-7 most critical macro and micro factors influencing the topic (e.g., regulatory environment, competitor density, talent availability).
  • Evidence Gathering: For each factor, cite at least one specific, verifiable piece of data or trend (provide if known: [INSERT ANY KNOWN DATA]).
  • Impact Assessment: Rate the impact of each factor on our objective [STATE OBJECTIVE] as High/Medium/Low and justify the rating in one sentence.
  • Implication Synthesis: Based on the high-impact factors, derive 3-5 actionable strategic implications for our company.
  • PASS/FAIL CRITERIA:

    • PASS: Each factor is distinct and not a subset of another.
    • PASS: Every factor has associated evidence (data point, trend, credible source mention).
    • PASS: All High-impact ratings have a logical justification tied to the objective.
    • PASS: Implications are direct "therefore, we should..." statements, not observations.
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, revisit the factor list. Consolidate or split factors, strengthen evidence, or sharpen implications until the analysis is dense, evidence-backed, and directive.
    `

    4. SWOT Analysis Template

    When to Use: To force a balanced, honest, and useful SWOT analysis that avoids generic points and links directly to strategy.
    `markdown ACT AS a management consultant. Conduct a rigorous SWOT analysis for [COMPANY/PROJECT NAME] in the context of [GOAL, e.g., "launching Product X"].

    RALPH LOOP INITIATED. Execute these ATOMIC TASKS:

  • Quadrant Population: Generate 3-4 items for each quadrant (Strengths, Weaknesses, Opportunities, Threats).
  • Specificity Enforcement: Rewrite each item to be specific and factual (e.g., not "good team," but "team has 3 engineers with 10+ years in fintech API development").
  • Cross-Linkage: For each Strength, identify an Opportunity it can exploit. For each Weakness, identify a Threat that could exacerbate it.
  • Strategic Question Formulation: Convert the top 2 items from each quadrant into a strategic question (e.g., "How can we leverage [Strength A] to capture [Opportunity B]?").
  • PASS/FAIL CRITERIA:

    • PASS: No item appears in more than one quadrant (e.g., something cannot be both a Strength and a Weakness).
    • PASS: All items are specific, avoiding generic terms like "good," "bad," "growing market."
    • PASS: Every Strength and Weakness has at least one cross-linkage.
    • PASS: Strategic questions are open-ended and require decision-making to answer.
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, identify the vague or misplaced items. Refine wording, reassign items to correct quadrants, and ensure cross-linkages are logical. Re-test.
    `

    5. OKR Development Template

    When to Use: To create Objectives and Key Results that are ambitious, measurable, and aligned, preventing vague or unattainable goals.
    `markdown ACT AS an OKR coach. Develop a set of Q[QUARTER NUMBER] OKRs for the [TEAM/DEPARTMENT NAME] focused on [BROAD THEME, e.g., "market expansion"].

    RALPH LOOP INITIATED. Complete these ATOMIC TASKS:

  • Objective Drafting: Write 3 inspirational, qualitative Objectives (O). Each must start with a verb and describe a desired outcome.
  • Key Result Crafting: For each Objective, draft 3 measurable Key Results (KRs). Each KR must be quantifiable (%, $, #, binary yes/no) and have a target value.
  • Alignment Check: Verify each KR directly measures progress toward its parent Objective. If removed, the Objective's progress would be unclear.
  • Health Assessment: Label each KR as either "Committed" (must be fully achieved) or "Aspirational" (stretch goal). Ensure at least 2/3 are Aspirational.
  • PASS/FAIL CRITERIA:

    • PASS: All Objectives are qualitative and inspirational (no numbers).
    • PASS: All Key Results are quantitative and have a clear target (e.g., "Increase NPS from 30 to 40").
    • PASS: No KR is an activity/task (e.g., "Launch campaign"), but an outcome/result (e.g., "Acquire 1000 leads from campaign").
    • PASS: The set has a mix of Committed and Aspirational KRs as defined.
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, convert any activity-based KRs to outcome-based. Adjust targets to be measurable. Rewrite vague Objectives. Re-run the alignment and health checks.
    `

    6. Meeting Agenda Creation Template

    When to Use: To design a meeting agenda that guarantees a productive, decision-oriented meeting with clear outcomes and ownership.
    `markdown ACT AS an executive assistant. Create a 60-minute meeting agenda for a meeting titled: "[MEETING TITLE]".

    RALPH LOOP INITIATED. Execute these ATOMIC TASKS:

  • Outcome Definition: State the single, concrete decision or output required by the end of the meeting (e.g., "Approve Q3 budget draft," "Finalize list of top 3 candidate features").
  • Topic Sequencing: List discussion topics in the optimal order to reach the outcome. Allocate strict minutes to each.
  • Owner Assignment: For each topic, assign a discussion lead (role or name).
  • Pre-Work Specification: List any documents, data, or pre-reading required from attendees for each topic.
  • Success Metrics: Define how each topic will be considered "complete" (e.g., "Topic complete when action owner is assigned and deadline is set").
  • PASS/FAIL CRITERIA:

    • PASS: The final meeting outcome is a concrete, actionable artifact or decision.
    • PASS: The sum of all topic timings equals 50 minutes (leaving 10 for buffer/admin).
    • PASS: Every topic has a designated lead.
    • PASS: The "completion" metric for each topic is not "discussed," but "decided" or "assigned."
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, tighten the outcome statement. Rebalance timings. Replace vague completion metrics with specific ones. Ensure the agenda is a decision-making engine, not a talking schedule.
    `

    7. Proposal Writing Template

    When to Use: To draft client or stakeholder proposals that are compelling, tailored, and address all potential objections within the structure.
    `markdown ACT AS a proposal writer. Draft a proposal for [CLIENT/PROJECT NAME] to [PROPOSAL GOAL, e.g., "secure funding for Phase 2"].

    RALPH LOOP INITIATED. Complete these ATOMIC TASKS:

  • Problem Restatement: In the client's voice, summarize the problem/opportunity you are solving for them.
  • Solution Blueprint: Describe your proposed solution in clear, benefit-oriented steps. Use their terminology: [INSERT CLIENT TERMS].
  • Value Quantification: Where possible, quantify the value (ROI, time saved, revenue increase) of your solution. Use provided data: [INSERT DATA].
  • Objection Handling: Pre-emptively address the top 3 likely objections (e.g., cost, timeline, expertise) with rebuttals.
  • Clear Call to Action: State the exact next step the reader must take (e.g., "Sign the attached SOW by Friday," "Schedule a technical deep-dive").
  • PASS/FAIL CRITERIA:

    • PASS: The problem statement uses the client's likely wording, not just yours.
    • PASS: The solution section explicitly links each step to a client benefit.
    • PASS: At least one value metric is quantified (even if estimated).
    • PASS: Objections are addressed proactively, not just listed.
    • PASS: The call to action is a single, simple, concrete step.
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, strengthen the client-centric language. Add missing quantifications. Develop stronger rebuttals. Sharpen the call to action. Iterate until the proposal is persuasive and frictionless.
    `

    8. Budget Planning Template

    When to Use: To build a detailed, justifiable, and traceable budget that accounts for both costs and required approvals.
    `markdown ACT AS a financial planner. Create a line-item budget for [PROJECT/INITIATIVE NAME] for [TIMEFRAME].

    RALPH LOOP INITIATED. Execute these ATOMIC TASKS:

  • Category Definition: Establish budget categories (e.g., Personnel, Software, Marketing, Contingency).
  • Line-Item Creation: List every specific expense under each category. Include description, quantity, unit cost, and total.
  • Justification & Source: For any non-obvious line item, add a one-sentence justification and source for the cost estimate (e.g., "Quote from Vendor X dated...").
  • Subtotal & Total: Calculate subtotals per category and a grand total.
  • Approval Flagging: Mark any line item that requires special approval beyond the overall budget (e.g., "Requires CTO approval").
  • PASS/FAIL CRITERIA:

    • PASS: Every line item has a quantity, unit cost, and total.
    • PASS: The sum of all line-item totals matches the calculated grand total.
    • PASS: Any cost estimate over [SET THRESHOLD, e.g., $5000] has a justification/source noted.
    • PASS: Contingency is included as a category (typically 10-15%).
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, track down calculation mismatches. Add missing justifications. Ensure categories are MECE (Mutually Exclusive, Collectively Exhaustive). The final budget must be auditable and defensible.
    `
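    The arithmetic criteria here (line totals, category subtotals, grand total) are exactly the kind of checks worth automating so the iteration loop gets an objective verdict. A minimal sketch with hypothetical line items:
    `python
    # Each row: (category, description, quantity, unit_cost, stated_total)
    line_items = [
        ("Software",    "Analytics licenses",        10,   120.0,  1_200.0),
        ("Personnel",   "Contract developer (days)", 30,   650.0, 19_500.0),
        ("Contingency", "10% reserve",                1, 2_070.0,  2_070.0),
    ]
    stated_grand_total = 22_770.0

    failures = []
    for category, desc, qty, unit_cost, stated in line_items:
        if abs(qty * unit_cost - stated) > 0.01:
            failures.append(f"FAIL: '{desc}' total does not equal quantity x unit cost")
    if abs(sum(row[4] for row in line_items) - stated_grand_total) > 0.01:
        failures.append("FAIL: line-item totals do not sum to the grand total")

    print(failures or "PASS: budget arithmetic reconciles")
    `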

    9. Process Documentation Template

    When to Use: To document a business process so clearly that a new hire could execute it correctly without prior training.
    `markdown ACT AS a process analyst. Document the end-to-end process for: [PROCESS NAME, e.g., "Monthly Financial Close"].

    RALPH LOOP INITIATED. Complete these ATOMIC TASKS:

  • Trigger & Outcome: Define what starts the process (the trigger) and what defines its successful completion (the outcome).
  • Step-by-Step Sequence: List every step in order. Format as "[Role] does [Action] using [Tool/Input] to produce [Output]."
  • Decision Point Mapping: At each step where a decision is made (e.g., "If value > X, then..."), document all possible branches.
  • Error Handling: For the 3 most common points of failure, describe the remediation procedure.
  • Handoff Clarity: Explicitly state how the output of one step is delivered to the next role (e.g., "Email report to...," "Update ticket status in Jira to 'Ready for Review'").
  • PASS/FAIL CRITERIA:

    • PASS: Every step follows the "[Role] does [Action]..." format.
    • PASS: All decision branches are covered, leaving no "what if?" unanswered.
    • PASS: Error handling procedures are actionable, not just "notify manager."
    • PASS: A new person in each role could identify their handoff responsibilities.
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, walk through the process mentally. Identify ambiguous steps, missing decisions, or vague handoffs. Rewrite for robotic clarity. Test by asking "Could this step be misinterpreted?"
    `

    10. Training Material Template

    When to Use: To create training guides or modules that are structured for comprehension, retention, and practical application.
    `markdown ACT AS a learning & development specialist. Create training material for [TOPIC/SKILL] for an audience of [AUDIENCE ROLE].

    RALPH LOOP INITIATED. Execute these ATOMIC TASKS:

  • Learning Objective Definition: Write 3-5 terminal learning objectives in the format: "By the end, you will be able to [perform specific action]."
  • Content Chunking: Break the topic into logical modules. For each, provide: a) Concept Explanation, b) Real-World Example, c) Common Pitfall.
  • Knowledge Check Creation: Draft 2-3 multiple-choice or short-answer questions per module that test application, not just recall.
  • Practice Task Design: Create one realistic "hands-on" task for the learner to complete using the material (e.g., "Using the template, draft a sample...").
  • Resource Listing: List links, tools, or job aids the learner will need during and after training.
  • PASS/FAIL CRITERIA:

    • PASS: All learning objectives are observable, measurable actions.
    • PASS: Every module contains the "Explanation, Example, Pitfall" triad.
    • PASS: Knowledge check questions have plausible distractors (wrong answers) that test understanding.
    • PASS: The practice task directly applies the primary learning objective.
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, rewrite vague objectives. Ensure examples are relevant to the audience's daily work. Strengthen knowledge checks to diagnose real confusion. The material must be self-contained and effective.
    `

    11. Policy Document Template

    When to Use: To draft clear, unambiguous, and enforceable company policies that minimize interpretation risk.
    `markdown ACT AS a compliance officer. Draft a policy document titled: "[POLICY NAME]".

    RALPH LOOP INITIATED. Complete these ATOMIC TASKS:

  • Purpose & Scope: Define the policy's goal in one sentence. Explicitly state who and what it applies to (and any exceptions).
  • Policy Statement: State the core rules or standards in clear, directive language (use "must," "shall," "is prohibited").
  • Procedure Outline: List the steps to comply with the policy. Assign responsibility for each step to a role.
  • Violation Definition: Describe what constitutes a violation with specific examples.
  • Enforcement & Review: State the consequences of violation and how/when the policy itself will be reviewed.
  • PASS/FAIL CRITERIA:

    • PASS: The Scope section leaves no ambiguity about who is covered.
    • PASS: Policy statements are written in mandatory language, not suggestions.
    • PASS: Procedures are actionable steps, not goals.
    • PASS: Violation examples are clear-cut and realistic.
    • PASS: The document is free of subjective terms like "reasonable," "appropriate" without definition.
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, eliminate ambiguity. Replace "should" with "must." Add explicit inclusions/exclusions to scope. Define any subjective terms. The policy must be enforceable as written.
    `

    12. Executive Summary Template

    When to Use: To condense a complex document (plan, report, proposal) into one page that drives decision-making by highlighting the essential "so what."
    `markdown ACT AS an executive. Write a one-page executive summary for the following document: [DOCUMENT TITLE/PURPOSE].

    RALPH LOOP INITIATED. Execute these ATOMIC TASKS:

  • Situation Concision: In two sentences, summarize the current situation or problem being addressed.
  • Proposal Core: State the core recommendation or conclusion in one bold sentence.
  • Key Evidence: Bullet the 3-5 most compelling data points, findings, or arguments that support the proposal.
  • Implications/Risks: Bullet the 2-3 most significant business implications (positive) and key risks/mitigations.
  • Ask/Next Step: State the specific decision or resource being requested from the reader.
  • PASS/FAIL CRITERIA:

    • PASS: The entire summary would fit on one standard page (approx. 300 words).
    • PASS: The core proposal is immediately clear within the first 3 sentences.
    • PASS: Every bulleted evidence point is a hard fact or logical argument from the full document.
    • PASS: The "Ask" is unambiguous (e.g., "Approve $50k budget," "Greenlight Phase 1").
    • FAIL: Any criterion is not met.
    ITERATION LOGIC: If FAIL, ruthlessly cut background and detail. Elevate the core proposal. Select only the most decisive evidence. Refine the "Ask" to be a binary decision. Iterate until the summary is dense with insight and directive.
    `

    # Advanced Ralph Prompt Techniques for Claude Code

    Introduction to Advanced Ralph Loops

    While basic Ralph prompts establish the foundation of atomic tasks with clear pass/fail criteria, advanced techniques transform Claude Code from a simple executor into a sophisticated autonomous system. These methods enable handling of complex, multi-stage projects with built-in quality control, adaptive workflows, and self-correction mechanisms.

    1. Chaining Multiple Ralph Prompts

    Chaining connects sequential Ralph prompts where the output of one becomes the input for another, creating sophisticated workflows.

    Basic Chain Pattern

    `markdown

    TASK CHAIN: Data Processing Pipeline

    PROMPT 1: Data Extraction

    Atomic Task: Extract all customer email addresses from the provided document Pass Criteria:
  • At least 15 email addresses found
  • All emails match standard email format (e.g., name@domain.com)
  • No duplicate emails in results
  • Output Format: JSON array of strings

    PROMPT 2: Data Validation

    Input: [OUTPUT FROM PROMPT 1] Atomic Task: Validate each email using regex and DNS MX record check Pass Criteria:
  • All emails pass regex validation
  • At least 80% have valid MX records
  • No disposable email domains present
  • Output Format: JSON with valid/invalid arrays

    PROMPT 3: Data Enrichment

    Input: [VALID EMAILS FROM PROMPT 2] Atomic Task: Append company names based on email domains Pass Criteria:
  • Company identified for at least 70% of emails
  • No incorrect company assignments (manually verifiable sample)
  • Output Format: JSON array with email, company, validation_status
    `
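    In practice, a chain like this is driven by a small orchestration script that feeds each prompt's output into the next and enforces the pass criteria between stages. The sketch below abstracts the model call behind a hypothetical run_prompt function (how you call Claude Code or an API is up to your setup); only the Prompt 1 gate is shown in full.
    `python
    import json
    import re

    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def run_prompt(prompt: str) -> str:
        """Hypothetical wrapper around your model call (Claude Code, API, etc.)."""
        raise NotImplementedError

    def stage_1_extract(document: str) -> list[str]:
        raw = run_prompt(f"Extract all customer email addresses as a JSON array:\n{document}")
        emails = json.loads(raw)
        # Pass criteria from Prompt 1: at least 15 addresses, valid format, no duplicates
        assert len(emails) >= 15, "FAIL: fewer than 15 email addresses found"
        assert all(EMAIL_RE.match(e) for e in emails), "FAIL: malformed email in output"
        assert len(set(emails)) == len(emails), "FAIL: duplicate emails present"
        return emails

    def stage_2_validate(emails: list[str]) -> dict:
        raw = run_prompt("Validate these emails (regex + MX check), return JSON:\n" + json.dumps(emails))
        return json.loads(raw)  # becomes the input to Prompt 3
    `
    A failed assertion is the signal to re-run the offending stage with the failure message appended to the prompt, which is the Ralph Loop expressed in code.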

    Conditional Chaining Template

    `markdown

    CONDITIONAL CHAIN WORKFLOW

    STAGE 1: Initial Processing

    [Standard Ralph prompt here]

    CHAINING RULES:

IF output meets criteria A → Proceed to Stage 2A
IF output meets criteria B → Proceed to Stage 2B
IF output fails → Execute Diagnostic Stage

    STAGE 2A: Path for High-Quality Data

Trigger Condition: Data quality score > 90%
Task: [Specialized processing]

    STAGE 2B: Path for Medium-Quality Data

Trigger Condition: Data quality score 70-90%
Task: [Cleaning then processing]

    DIAGNOSTIC STAGE: Failure Recovery

Trigger Condition: Any stage fails
Task: Identify failure root cause and suggest correction
    `

    2. Nested Loops for Complex Tasks

    Nested loops create hierarchical task structures where main tasks contain sub-loops, enabling complex project breakdown.

    Project Implementation Template

    `markdown

    MAIN LOOP: Website Redesign Project

    Overall Pass Criteria:
  • All pages pass WCAG 2.1 AA accessibility
  • Mobile performance score > 85
  • Cross-browser compatibility confirmed

SUB-LOOP 1: Homepage Implementation

Atomic Task: Create homepage with hero, features, testimonials
Pass Criteria:
- Loads under 2 seconds on 3G
- Lighthouse score > 90
- All images have alt text

    SUB-LOOP 1.1: Hero Section

Atomic Task: Implement responsive hero with CTA
Pass Criteria:
- Above-fold content loads in <1s
- CTA contrast ratio 4.5:1 minimum
- Works on viewports 320px to 1920px

    SUB-LOOP 1.2: Features Grid

Atomic Task: Create 3-column features section
Pass Criteria:
- Grid collapses to 1 column on mobile
- Icons are SVG format
- Section score 95+ on Lighthouse

    [Continues with more nested levels...] `

    Nested Loop Control Structure

    `markdown

    NESTED LOOP CONTROL MECHANISM

for each MAIN_TASK in project:
    execute RALPH_LOOP(MAIN_TASK)
    if MAIN_TASK.passed:
        for each SUB_TASK in MAIN_TASK.subtasks:
            execute RALPH_LOOP(SUB_TASK)
            if SUB_TASK.passed:
                continue to next SUB_TASK
            else:
                execute DIAGNOSTIC(SUB_TASK)
                retry SUB_TASK (max 3 attempts)
    else:
        escalate to PROJECT_MANAGER_LOOP
`
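
The same control flow reads naturally as plain Python. This is only a sketch of the harness side: ralph_loop, diagnostic, and escalate are hypothetical hooks into your Claude Code session, not part of any library.

`python
MAX_RETRIES = 3

def execute_project(project, ralph_loop, diagnostic, escalate):
    # project: list of main tasks, each a dict with a "subtasks" list.
    # ralph_loop(task) -> bool, diagnostic(task), escalate(task) are supplied by you.
    for main_task in project:
        if not ralph_loop(main_task):
            escalate(main_task)          # main task failed: hand off to the manager loop
            continue
        for sub_task in main_task["subtasks"]:
            for _ in range(MAX_RETRIES):
                if ralph_loop(sub_task):
                    break                # sub-task passed, move on
                diagnostic(sub_task)     # analyze the failure before retrying
            else:
                escalate(sub_task)       # max attempts exhausted without a pass
`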

    3. Conditional Task Execution

    Conditional execution allows dynamic workflow paths based on intermediate results.

    Smart Content Generation Example

    `markdown

    CONDITIONAL CONTENT WORKFLOW

    PHASE 1: Topic Analysis

Task: Analyze target keyword for intent and competition
Output Metrics:
    • Intent type (informational/commercial/transactional)
    • Difficulty score (1-100)
    • Related entities

    CONDITIONAL PATHS:

IF intent_type == "informational" AND difficulty < 50:
    EXECUTE: Comprehensive Guide Path
    TASK: Create 3000+ word ultimate guide
    CRITERIA: Covers 10+ subtopics, includes examples
ELSE IF intent_type == "transactional" AND difficulty < 70:
    EXECUTE: Product Comparison Path
    TASK: Compare 3-5 leading solutions
    CRITERIA: Includes feature matrix, pricing, pros/cons
ELSE IF difficulty >= 70:
    EXECUTE: Skyscraper Technique Path
    TASK: Analyze top 5 competitors, create better content
    CRITERIA: 50% more comprehensive, updated information
ELSE:
    EXECUTE: Standard Article Path
    TASK: Write 1500-word focused article
    CRITERIA: Covers main topic thoroughly
`

    Conditional Code Generation Template

    `markdown

    ADAPTIVE CODE GENERATION

    ANALYSIS STAGE:

    Analyze the problem statement and determine:
  • Complexity level (simple/moderate/complex)
  • Required error handling level
  • Performance requirements

GENERATION PATHS:

SWITCH(complexity):
    CASE "simple":
        TASK: Generate straightforward implementation
        CRITERIA: <100 lines, basic error handling
        LIBRARIES: Standard library only
    CASE "moderate":
        TASK: Generate robust implementation
        CRITERIA: <300 lines, comprehensive error handling
        LIBRARIES: Add 1-2 common dependencies
    CASE "complex":
        TASK: Generate enterprise implementation
        CRITERIA: Modular architecture, full test suite
        LIBRARIES: Use established design patterns
`

    4. Parallel Task Verification

    Parallel verification uses multiple validation approaches simultaneously for higher confidence.

    Multi-Metric Validation Template

    `markdown

    PARALLEL VERIFICATION SYSTEM

    TASK: Generate API response handler

    VERIFICATION CHANNELS (run simultaneously):

    CHANNEL 1: Unit Test Verification
`python
# Generated tests must pass
def test_api_handler():
    assert handle_response(mock_success) == expected_success
    assert handle_response(mock_error) == expected_error
# 5+ test cases minimum
`

CHANNEL 2: Static Analysis Verification

`bash
# Run these checks in parallel
pylint api_handler.py --score 9.0+
mypy api_handler.py --no-error
bandit api_handler.py --skip B101,B102
`

CHANNEL 3: Performance Verification

`python
# Performance criteria
response_time < 100ms @ 1000rps
memory_usage < 50MB peak
no memory leaks in 100k iterations
`

CHANNEL 4: Security Verification
    • No SQL injection vectors
    • Input validation on all parameters
    • Proper error messages (no info leakage)

    PASS REQUIREMENTS:

All 4 channels must report PASS independently.
Any FAIL in any channel triggers full rework.
    `

    Cross-Validation Pattern

    `markdown

    CROSS-VALIDATION WORKFLOW

    PRIMARY VALIDATION PATH:

Method: Automated test suite
Criteria: 100% test coverage of critical paths

    SECONDARY VALIDATION PATH:

Method: Manual review checklist
Criteria: Senior developer approves architecture

    TERTIARY VALIDATION PATH:

Method: Peer review simulation
Criteria: Generate and address 3+ potential critiques

    QUATERNARY VALIDATION PATH:

Method: Production simulation
Criteria: Handles 2x expected load without degradation

    DECISION MATRIX:

| Validation Path | Weight | Required Score |
|-----------------|--------|----------------|
| Automated Tests | 40%    | 100%           |
| Manual Review   | 30%    | 90%            |
| Peer Review     | 20%    | 85%            |
| Load Test       | 10%    | 100%           |

OVERALL PASS: Weighted score ≥ 92%
    `
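
The decision matrix reduces to a single weighted sum once each path reports a score. A small sketch using the weights and the 92% bar from the table above (fractions stand in for percentages):

`python
WEIGHTS = {"automated_tests": 0.40, "manual_review": 0.30,
           "peer_review": 0.20, "load_test": 0.10}

def overall_pass(scores: dict, threshold: float = 0.92) -> bool:
    # scores: each validation path's result as a fraction, e.g. 1.0 for 100%
    weighted = sum(WEIGHTS[path] * scores[path] for path in WEIGHTS)
    return weighted >= threshold

print(overall_pass({"automated_tests": 1.0, "manual_review": 0.90,
                    "peer_review": 0.85, "load_test": 1.0}))  # True (weighted score 0.94)
`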

    5. Escalation Paths for Stuck Tasks

    Escalation paths provide systematic approaches when tasks repeatedly fail.

    Tiered Escalation Framework

    `markdown

    ESCALATION FRAMEWORK

    TIER 1: Self-Correction (3 attempts)

Triggers: Initial failure
Actions:
  • Analyze error message
  • Adjust approach slightly
  • Retry with variations
  • Timeout: 5 minutes per attempt

    TIER 2: Alternative Approach (2 attempts)

Triggers: Tier 1 exhausted
Actions:
  • Try fundamentally different method
  • Break task into smaller subtasks
  • Consult knowledge base for patterns
  • Timeout: 10 minutes per attempt

    TIER 3: Human-Analog Simulation

Triggers: Tier 2 exhausted
Actions:
  • "Think aloud" problem analysis
  • Research similar solved problems
  • Generate multiple solutions, test best
  • Timeout: 15 minutes

    TIER 4: External Resource Consultation

Triggers: Tier 3 exhausted
Actions:
  • Search documentation (simulated)
  • Consult "experienced developer" persona
  • Review GitHub issues for similar problems
  • Timeout: 20 minutes

    TIER 5: Task Decomposition & Delegation

Triggers: Tier 4 exhausted
Actions:
  • Break into 5+ micro-tasks
  • Solve easiest parts first
  • Recombine partial solutions
  • Timeout: 30 minutes

    TIER 6: Escalate to Human

Triggers: All tiers exhausted OR 60+ minutes spent
Actions:
  • Document all attempts
  • Provide detailed failure analysis
  • Suggest 3 possible human interventions
`
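
If you want the tiers enforced rather than merely described, the harness that drives Claude Code can cap attempts and overall time itself. A sketch under those assumptions; attempt_task and escalate_to_human are hypothetical hooks, and per-attempt timeouts are left to them:

`python
import time

# (max attempts, suggested timeout in seconds) for Tiers 1-5, mirroring the framework above
TIERS = [(3, 300), (2, 600), (1, 900), (1, 1200), (1, 1800)]

def run_with_escalation(task, attempt_task, escalate_to_human, budget=3600):
    start, log = time.time(), []
    for tier, (attempts, _timeout) in enumerate(TIERS, start=1):
        for n in range(1, attempts + 1):
            if time.time() - start > budget:
                return escalate_to_human(task, log)   # Tier 6: 60+ minutes spent
            passed = attempt_task(task, tier)
            log.append((tier, n, passed))
            if passed:
                return True
    return escalate_to_human(task, log)               # all tiers exhausted
`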

    Stuck Task Diagnostic Template

    `markdown

    STUCK TASK DIAGNOSTIC PROTOCOL

    When task fails 2+ times, execute:

    STEP 1: Root Cause Analysis

`python
def diagnose_failure(task, attempts):
    issues = []
    for attempt in attempts:
        issues.append({
            'failure_point': identify_failure(attempt),
            'error_type': categorize_error(attempt),
            'partial_success': measure_partial(attempt)
        })
    return analyze_patterns(issues)
`

    STEP 2: Solution Generation

    Based on diagnosis, generate 3 alternative approaches:
  • Conservative Approach: Simplest working solution
  • Robust Approach: Handles edge cases thoroughly
  • Innovative Approach: Novel method that might work

STEP 3: Approach Selection Matrix

| Approach     | Success Probability | Time Estimate | Complexity |
|--------------|---------------------|---------------|------------|
| Conservative | High (80%)          | Short         | Low        |
| Robust       | Medium (60%)        | Medium        | High       |
| Innovative   | Low (30%)           | Long          | Medium     |

    STEP 4: Implement Selected Approach

    With additional monitoring and checkpoints
    `

    6. Quality Threshold Escalation

    Quality thresholds that increase with iterations ensure continuous improvement.

    Progressive Quality Framework

    `markdown

    PROGRESSIVE QUALITY ESCALATION

    ITERATION 1: Minimum Viable Quality

Threshold: Functional correctness only
Criteria:
    • Basic functionality works
    • No critical errors
    • Meets minimum requirements

    ITERATION 2: Good Quality

Trigger: Iteration 1 passes
Threshold: Add robustness and edge cases
Criteria:
    • Handles common edge cases
    • Reasonable error messages
    • Basic performance acceptable

    ITERATION 3: High Quality

Trigger: Iteration 2 passes
Threshold: Production-ready standards
Criteria:
    • Comprehensive error handling
    • Performance optimized
    • Security reviewed
    • Documentation complete

    ITERATION 4: Excellent Quality

Trigger: Iteration 3 passes (optional)
Threshold: Beyond requirements
Criteria:
    • Exceptional performance
    • Elegant, maintainable code
    • Anticipates future needs
    • Includes monitoring hooks
    `

    Quality Escalation in Content Creation

    `markdown

    CONTENT QUALITY ESCALATION LOOP

    BASELINE (Must Pass):

    • 1000+ words
    • No grammatical errors
    • Covers main topic
    • Basic structure (H2/H3)

    LEVEL 1 ESCALATION (If time < 50% used):

    • Add data/statistics with sources
    • Include practical examples
    • Improve readability (Flesch score > 60)
    • Add internal linking suggestions

    LEVEL 2 ESCALATION (If time < 75% used):

    • Include expert quotes or research
    • Add multimedia suggestions (images/diagrams)
    • Create actionable takeaways
    • Optimize for target keywords

    LEVEL 3 ESCALATION (If time remains):

    • Generate FAQ section
    • Create summary infographic concept
    • Suggest related content topics
    • Add interactive element ideas
    `

    7. Multi-Perspective Evaluation

    Evaluating outputs from multiple perspectives reduces bias and increases robustness.

    Perspective Evaluation Template

    `markdown

    MULTI-PERSPECTIVE EVALUATION FRAMEWORK

    TASK: Write privacy policy for SaaS application

PERSPECTIVE 1: Regulatory Compliance

Evaluator Persona: GDPR Compliance Officer
Check Criteria:
    • Includes all required GDPR clauses
    • Proper cookie consent description
    • Data processing purposes clearly stated
    • User rights section comprehensive

    PERSPECTIVE 2: User Experience

Evaluator Persona: Concerned End User
Check Criteria:
    • Language clear and understandable (not legalese)
    • Easy to find important information
    • No hidden concerning clauses
    • Contact information prominent

    PERSPECTIVE 3: Business Protection

Evaluator Persona: Company Lawyer
Check Criteria:
    • Limits liability appropriately
    • Protects intellectual property
    • Allows necessary data processing
    • Covers all company services

    PERSPECTIVE 4: Technical Accuracy

Evaluator Persona: Technical Lead
Check Criteria:
    • Accurately describes data flows
    • Matches actual system architecture
    • Covers all APIs and integrations
    • Reflects actual retention periods

    CONSENSUS REQUIREMENT:

All 4 perspectives must rate output ≥ 8/10.
Any perspective ≤ 6/10 triggers revision.
    `
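
The consensus rule is just two comparisons over the four ratings, which makes it easy to state unambiguously in the prompt. A minimal sketch:

`python
def consensus(ratings: dict) -> str:
    # ratings: persona -> score out of 10, e.g. {"compliance": 9, "user": 8, ...}
    if any(score <= 6 for score in ratings.values()):
        return "REVISE"     # any rating of 6 or below forces a revision
    if all(score >= 8 for score in ratings.values()):
        return "PASS"       # every perspective satisfied
    return "ITERATE"        # no hard failure, but consensus not yet reached

print(consensus({"compliance": 9, "user": 8, "lawyer": 8, "tech": 7}))  # ITERATE
`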

    Code Review from Multiple Perspectives

    `markdown

    MULTI-PERSPECTIVE CODE REVIEW

    PERSPECTIVE A: Security Auditor

`python
# Security checks
SECURITY_CHECKS = [
    "No hardcoded credentials",
    "Input validation on all endpoints",
    "SQL injection prevention",
    "XSS protection implemented",
    "CSRF tokens where needed",
    "Secure headers configured"
]
`

    PERSPECTIVE B: Performance Engineer

`python
# Performance checks
PERFORMANCE_CHECKS = [
    "N+1 query problems avoided",
    "Database indexes properly used",
    "Caching strategy implemented",
    "Response times < 200ms p95",
    "Memory usage stable under load"
]
`

    PERSPECTIVE C: DevOps Specialist

`python
# Operations checks
OPS_CHECKS = [
    "Logging at appropriate levels",
    "Metrics exposed for monitoring",
    "Configuration externalized",
    "Health check endpoint present",
    "Graceful shutdown handling"
]
`

    PERSPECTIVE D: Maintenance Developer

`python
# Maintainability checks
MAINTENANCE_CHECKS = [
    "Code complexity < 10 per function",
    "Documentation covers public APIs",
    "Test coverage > 80%",
    "Clear error messages",
    "Consistent coding style"
]
`

    INTEGRATION:

    Generate 4 separate reports, then synthesize into unified action items
    `

    8. Self-Correction Patterns

    Self-correction patterns enable Claude to identify and fix its own mistakes.

    Reflection-Based Correction

    `markdown

    REFLECTION AND SELF-CORRECTION LOOP

    PHASE 1: Initial Attempt

`python
def solve_problem(requirements):
    # First attempt implementation
    return initial_solution
`

    PHASE 2: Critical Reflection

    Reflection Questions:
  • What assumptions did I make that might be wrong?
  • Are there edge cases I haven't considered?
  • Is there a simpler approach I overlooked?
  • What would an expert critique about this solution?

PHASE 3: Gap Analysis

`python
def analyze_gaps(solution, requirements):
    gaps = []
    for req in requirements:
        if not requirement_met(req, solution):
            gaps.append({
                'requirement': req,
                'gap': identify_gap(req, solution),
                'severity': assess_severity(req)
            })
    return prioritized_gaps(gaps)
`

    PHASE 4: Targeted Correction

    For each high-severity gap:
  • Understand why gap exists
  • Design specific fix
  • Implement and test fix
  • Verify gap is closed

PHASE 5: Holistic Review

    After all gaps addressed:
  • Review entire solution for consistency
  • Check for new issues introduced by fixes
  • Verify all original requirements still met
`

    Iterative Refinement Pattern

    `markdown

    ITERATIVE REFINEMENT PROCESS

    ITERATION 0: Naive Implementation

Goal: Make it work somehow
Acceptance: Basic functionality

ITERATION 1: Make it correct

Focus: Fix all bugs and edge cases
Techniques: Add tests, handle errors

ITERATION 2: Make it efficient

Focus: Performance optimization
Techniques: Profile, benchmark, optimize hotspots

ITERATION 3: Make it maintainable

Focus: Code quality and structure
Techniques: Refactor, document, improve naming

ITERATION 4: Make it robust

Focus: Production readiness
Techniques: Add monitoring, logging, metrics

ITERATION 5: Make it elegant

Focus: Simplicity and clarity
Techniques: Remove duplication, simplify logic

    CONTINUOUS IMPROVEMENT:

    After each iteration, ask: "What's the next biggest weakness?"
    `

    Meta-Correction Template

    `markdown

    META-CORRECTION FRAMEWORK

    STEP 1: Output Analysis

    Analyze current output against criteria:
    • Which criteria are fully met?
    • Which are partially met?
    • Which are completely missed?

    STEP 2: Pattern Recognition

    Look for patterns in failures:
    • Are failures related (e.g., all timing issues)?
    • Is there a root cause affecting multiple criteria?
    • Are failures in dependent criteria?

    STEP 3: Correction Strategy Selection

Based on failure patterns:

PATTERN A: Multiple unrelated failures
Strategy: Break task into smaller pieces, solve independently

PATTERN B: Single root cause affecting many criteria
Strategy: Address root cause first, then reassess

PATTERN C: Criteria conflicts
Strategy: Reconcile requirements, seek clarification

PATTERN D: Missing knowledge/skills
Strategy: Research, learn, then reattempt

    STEP 4: Implement Correction

    Execute selected strategy with additional verification

    STEP 5: Learn and Adapt

    Update approach based on what worked/didn't work
    `

    Implementation Checklist for Advanced Ralph Prompts

    `markdown

    ADVANCED RALPH PROMPT CHECKLIST

    Design Phase:

    • [ ] Task complexity justifies advanced techniques
    • [ ] Appropriate technique(s) selected for problem type
    • [ ] Clear escalation paths defined
    • [ ] Quality thresholds established
    • [ ] Multiple verification methods planned

    Implementation Phase:

    • [ ] Chaining logic clearly defined
    • [ ] Conditional paths cover all scenarios
    • [ ] Parallel verification independent
    • [ ] Self-correction mechanisms built in
    • [ ] Timeouts and limits specified

    Testing Phase:

    • [ ] Test with edge cases
    • [ ] Verify escalation triggers appropriately
    • [ ] Confirm quality thresholds achievable
    • [ ] Test failure recovery
    • [ ] Validate multi-perspective evaluation

    Documentation:

    • [ ] Technique selection rationale documented
    • [ ] All paths and conditions explained
    • [ ] Expected behaviors described
    • [ ] Troubleshooting guide included
    `

    These advanced Ralph prompt techniques transform Claude Code from a simple task executor into a sophisticated autonomous system capable of handling complex, multi-stage projects with built-in quality control and adaptive workflows. By implementing these patterns, you can tackle increasingly complex challenges while maintaining the reliability and verifiability that make the Ralph methodology so powerful.

    ---

    # Common Ralph Prompt Mistakes and How to Avoid Them

    Introduction: The Cost of Poor Ralph Prompt Design

    Well-designed Ralph prompts can dramatically increase Claude Code's effectiveness, but common mistakes undermine the entire methodology. These errors lead to infinite loops, incorrect outputs, or Claude giving up entirely. Understanding and avoiding these pitfalls is crucial for successful Ralph loop implementation.

    1. Criteria Too Vague

    Vague criteria make verification impossible, defeating the entire purpose of Ralph loops.

    ❌ Bad Example: Vague Criteria

    `markdown

    TASK: Write a good article about SEO

    Pass Criteria:
    • The article should be high quality
    • It should be engaging to read
    • Cover SEO thoroughly
    • Be helpful for readers
`

Problems:
    • "High quality" isn't measurable
    • "Engaging" is subjective
    • "Thoroughly" has no definition
    • "Helpful" can't be verified

    ✅ Good Example: Specific, Measurable Criteria

    `markdown

    TASK: Write a comprehensive SEO article targeting "local SEO strategies"

    Pass Criteria:
  • Length & Structure: 1500-2000 words with H2/H3 headings
  • Keyword Coverage: Include "local SEO" 8-12 times naturally
  • Practical Elements: At least 5 actionable strategies with examples
  • Data Support: Include 3+ statistics with verifiable sources
  • Readability: Flesch Reading Ease score > 60 (test with tool)
  • Completeness: Cover Google Business Profile, reviews, local citations, and local content
  • Formatting: Include at least one bulleted list and one numbered list
  • Originality: Pass Copyscape check (no plagiarism)
`

Improvements:
    • Quantifiable metrics (word count, keyword density)
    • Testable conditions (Flesch score, plagiarism check)
    • Specific coverage requirements
    • Actionable verification steps

    Template for Specific Criteria:

    `markdown

    CRITERIA SPECIFICATION TEMPLATE

    For each criterion, specify:

  • Metric: What exactly is measured?
  • Target: What value indicates success?
  • Measurement Method: How will it be tested?
  • Acceptable Range: Minimum/maximum values?

Example transformation:
Vague: "Fast performance" → Specific: "Page load time < 2 seconds on 3G connection, tested using WebPageTest"
`
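
Criteria phrased this way can be checked mechanically inside the loop. A sketch of the kind of self-checks Claude Code could run against the SEO article above; the Flesch and plagiarism criteria still need external tools, so only the purely textual ones are shown:

`python
import re

def check_article(text: str) -> dict:
    words = re.findall(r"\b\w+\b", text)
    keyword_hits = len(re.findall(r"\blocal seo\b", text, flags=re.IGNORECASE))
    return {
        "length_ok": 1500 <= len(words) <= 2000,           # Length & Structure criterion
        "keyword_ok": 8 <= keyword_hits <= 12,             # Keyword Coverage criterion
        "has_bulleted_list": bool(re.search(r"^[-*•] ", text, flags=re.MULTILINE)),
        "has_numbered_list": bool(re.search(r"^\d+\. ", text, flags=re.MULTILINE)),
    }

print(check_article("Local SEO matters. " * 10))   # each criterion reports a plain True/False
`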

    2. Tasks Not Atomic Enough

    Non-atomic tasks combine multiple independent verifications, making failure diagnosis impossible.

    ❌ Bad Example: Compound Task

    `markdown

    TASK: Build a complete user authentication system

    Pass Criteria:
  • Users can register with email/password
  • Users can log in and receive JWT token
  • Password reset functionality works
  • Email verification implemented
  • Social login with Google and Facebook
  • Rate limiting prevents brute force attacks
  • All endpoints have proper validation
  • Database schema is optimized
`

Problems:
    • 8 independent components in one task
    • Failure could be in any component
    • No way to know which part failed
    • Too complex for single verification

    ✅ Good Example: Atomic Tasks

    `markdown

    TASK CHAIN: User Authentication System

    TASK 1: User Registration Endpoint

    Pass Criteria:
  • POST /api/register accepts email, password, name
  • Returns 201 Created on success
  • Password hashed with bcrypt (not plaintext)
  • Email format validated
  • Duplicate email prevented

TASK 2: Login and JWT Issuance

    Pass Criteria:
  • POST /api/login validates credentials
  • Returns JWT token on success (expires in 24h)
  • Returns 401 on invalid credentials
  • Token contains user ID and email in payload

TASK 3: Password Reset Flow

    Pass Criteria: [Continues with similarly atomic tasks...]
`

Improvements:
    • Each task has single responsibility
    • Clear pass/fail for each component
    • Easy to identify which task failed
    • Can parallelize development

    Atomicity Checklist:

    `markdown

    ATOMIC TASK CHECKLIST

    A task is atomic if:

    • [ ] It has a single primary objective
    • [ ] Success/failure is binary (not partial)
    • [ ] It can be verified independently
    • [ ] It doesn't depend on other tasks' implementation details
    • [ ] Its scope can be described in one sentence
    If you're using "and" in the task description, it's probably not atomic. "Build login AND registration" → Split into two tasks.
    `

    3. Missing Context

    Insufficient context leads to incorrect assumptions and irrelevant solutions.

    ❌ Bad Example: Context-Free Prompt

    `markdown

    TASK: Create a database schema

    Pass Criteria:
  • Tables properly normalized
  • Appropriate indexes defined
  • Foreign keys established
  • Data types appropriate
`

Problems:
    • What kind of data? Users? Products? Analytics?
    • What's the scale? 100 users or 10 million?
    • What queries need to be fast?
    • What database technology?

    ✅ Good Example: Context-Rich Prompt

    `markdown

    TASK: Create PostgreSQL schema for e-commerce platform

    CONTEXT:
    • Platform: Medium-sized e-commerce (10k-100k products)
    • Expected traffic: 1000 daily orders peak
    • Team: Small team, prefer simplicity over premature optimization
    • Future needs: may expand to multiple vendors
    • Existing systems: Will integrate with Stripe (payment) and SendGrid (email)
    SPECIFIC REQUIREMENTS:
  • Products have variants (size, color)
  • Inventory needs tracking
  • Orders include shipping addresses
  • Users can have multiple shipping addresses
  • Need sales reporting by day/product/category

TECHNICAL CONSTRAINTS:
    • Use PostgreSQL 14+
    • Prefer UUID over serial for IDs
    • Include created_at/updated_at timestamps
    • Soft deletes where appropriate
    Pass Criteria: [Specific criteria based on this context...]
`

Improvements:
    • Clear business context
    • Technical constraints specified
    • Scale considerations included
    • Integration requirements stated

    Context Template:

    `markdown

    CONTEXT TEMPLATE

    Business Context:

    • Purpose: [What problem are we solving?]
    • Users: [Who will use this?]
    • Scale: [Expected volume/growth]
    • Constraints: [Budget, timeline, regulations]

    Technical Context:

    • Environment: [Existing systems, tech stack]
    • Constraints: [Performance requirements, compatibility]
    • Standards: [Coding standards, architectural patterns]

    Domain Context:

    • Terminology: [Key terms and definitions]
    • Rules: [Business rules, validation logic]
    • Examples: [Sample inputs/outputs]
    `

    4. No Iteration Instructions

    Without clear iteration instructions, Claude doesn't know how to improve failed attempts.

    ❌ Bad Example: Missing Iteration Guidance

    `markdown

    TASK: Fix the bug in this code

Code: [Buggy code here]
Pass Criteria: All tests pass
`

Problems:
    • What to do if tests fail?
    • How many times to try?
    • Should different approaches be attempted?
    • How to diagnose the problem?

    ✅ Good Example: Explicit Iteration Protocol

    `markdown

    TASK: Debug and fix the authentication bug

CODE: [Buggy authentication code]
TEST SUITE: [Tests that currently fail]

    ITERATION PROTOCOL:

    ATTEMPT 1: Analyze error messages and fix obvious issues
    • Time limit: 5 minutes
    • Success: Tests pass
    • Failure: Proceed to Attempt 2
    ATTEMPT 2: Add debug logging, understand flow, then fix
    • Add console.log at key decision points
    • Trace the data flow
    • Identify where logic diverges from expected
    • Time limit: 10 minutes
    ATTEMPT 3: If still failing, research common authentication bugs
    • Check: Token expiration handling
    • Check: Middleware order
    • Check: Secret key configuration
    • Check: CORS issues
    • Time limit: 15 minutes
    ATTEMPT 4: Rewrite critical section from scratch
    • Implement known working pattern
    • Compare with original to identify difference
    • Time limit: 20 minutes
    ESCALATION: If all attempts fail, provide:
  • Detailed analysis of what was tried
  • Hypothesis about root cause
  • Suggestions for human intervention
`

Improvements:
    • Clear step-by-step approach
    • Time limits prevent infinite loops
    • Escalation path for stuck tasks
    • Progressive debugging strategies

    Iteration Template:

    `markdown

    ITERATION PROTOCOL TEMPLATE

    MAX ATTEMPTS: [3-5 typically]

    ATTEMPT 1: Quick Fix

    • Strategy: [Obvious fixes based on error messages]
    • Time limit: [X minutes]
    • Success criteria: [Specific condition]
    • Next on failure: [Move to Attempt 2]

    ATTEMPT 2: Systematic Analysis

    • Strategy: [Add debugging, trace execution]
    • Time limit: [X minutes]
    • Success criteria: [Specific condition]
    • Next on failure: [Move to Attempt 3]

    ATTEMPT 3: Alternative Approach

    • Strategy: [Try different algorithm/architecture]
    • Time limit: [X minutes]
    • Success criteria: [Specific condition]
    • Next on failure: [Escalate or provide analysis]

    FINAL OUTPUT REQUIREMENTS:

    If all attempts fail, provide:
  • What was tried
  • What was learned
  • Recommended next steps
`

    5. Success Criteria That Can't Be Tested

    Untestable criteria create verification paradoxes where success cannot be objectively determined.

    ❌ Bad Example: Untestable Criteria

    `markdown

    TASK: Write marketing copy for new product

    Pass Criteria:
  • Copy is compelling and persuasive
  • Readers feel excited about the product
  • The tone matches our brand voice
  • It will convert well
`

Problems:
    • "Compelling" is subjective
    • "Feel excited" can't be measured
    • "Brand voice" isn't defined
    • "Convert well" requires actual testing

    ✅ Good Example: Testable Criteria

    `markdown

    TASK: Write landing page copy for ProjectX

    TESTABLE CRITERIA:
  • Length & Structure: 300-400 words with clear value proposition above fold
  • Keyword Inclusion: Include "project management" 3-5 times naturally
  • CTA Clarity: Primary CTA button uses action-oriented text ("Start Free Trial")
  • Benefit Focus: Lead with benefits (save time, reduce errors) not features
  • Social Proof: Include space for testimonial placeholder [Testimonial here]
  • Scannability: Use bullet points for key features
  • Tone Guidelines:
    - Use active voice (90%+ of sentences)
    - Avoid jargon (Flesch-Kincaid grade level < 10)
    - Positive language (no "don't", "can't", "won't")
  • Technical Checks:
    - No spelling/grammar errors (run through Grammarly)
    - All sentences under 25 words
    - Paragraphs under 4 sentences

VERIFICATION METHOD: For each criterion, specify how Claude can test it:
    • Word count: Count words
    • Keyword density: Search for terms
    • Active voice: Check for passive constructions
    • Readability: Calculate Flesch-Kincaid
`

Improvements:
    • Objective metrics instead of subjective feelings
    • Clear verification methods for each criterion
    • Quantitative where possible, specific where qualitative
    • Automated checking where feasible

    Testability Transformation Guide:

    `markdown

    MAKING CRITERIA TESTABLE

Untestable → Testable

"Good performance" → "Loads in < 2s on 3G, tested with Lighthouse"
"User-friendly" → "Task completion rate > 90% with first-time users"
"Secure" → "Passes OWASP Top 10 checklist, no critical vulnerabilities"
"High quality" → "Code coverage > 80%, cyclomatic complexity < 10 per function"
"Engaging" → "Average time on page > 2 minutes, bounce rate < 40%"

    TESTABILITY CHECKLIST:

    • [ ] Can the criterion be measured objectively?
    • [ ] Is there a tool or method to verify it?
    • [ ] Can it be tested without human judgment?
    • [ ] Is the pass/fail threshold clear?
    `
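
When a criterion names its verification tool, the loop can call that tool directly. A sketch assuming the third-party textstat package is installed; the thresholds echo the examples used earlier in this section:

`python
import textstat  # third-party package, assumed available in the environment

def testable_checks(text: str) -> dict:
    return {
        "readable": textstat.flesch_reading_ease(text) > 60,        # objective, tool-measured
        "grade_level_ok": textstat.flesch_kincaid_grade(text) < 10,
        "short_sentences": all(len(s.split()) <= 25
                               for s in text.split(".") if s.strip()),
    }

print(testable_checks("Short sentences help. They keep the grade level low."))
`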

    6. Too Many Criteria (Overwhelming)

    Excessive criteria create cognitive overload and make verification impractical.

    ❌ Bad Example: Criteria Overload

    `markdown

    TASK: Create a contact form

    Pass Criteria (25 items):
  • Field for first name (required)
  • Field for last name (required)
  • Field for email (required, validated)
  • Field for phone (optional, validated if provided)
  • Dropdown for country (required)
  • Textarea for message (required, min 10 chars)
  • GDPR checkbox (required)
  • Marketing opt-in checkbox (optional)
  • Submit button
  • Form has client-side validation
  • Form has server-side validation
  • Success message shown on submit
  • Error messages clear and helpful
  • Form accessible (ARIA labels)
  • Mobile responsive
  • Loads quickly (< 1s)
  • No JavaScript errors
  • Works without JavaScript
  • CSRF protection
  • Rate limiting
  • Email sent to admin on submission
  • Copy sent to user on submission
  • Data saved to database
  • Form styling matches site theme
  • Tab order logical
`

Problems:
    • 25 criteria is overwhelming
    • Mixes functional, UX, security, performance
    • No prioritization
    • Verification would take forever

    ✅ Good Example: Prioritized, Grouped Criteria

    `markdown

    TASK: Create a contact form

    CRITICAL CRITERIA (Must Pass):

  • Core Functionality:
    - Collects: name, email, message (all required)
    - Validates email format
    - Stores submission in database
    - Sends confirmation email to user
  • Security:
    - CSRF token protection
    - Basic rate limiting (5 submissions/hour/IP)
    - SQL injection prevented

    IMPORTANT CRITERIA (Should Pass):

  • User Experience:
    - Clear validation messages
    - Mobile responsive layout
    - Accessible (keyboard navigable, ARIA labels)
  • Performance:
    - Page loads < 2s
    - Form submission < 3s

    NICE-TO-HAVE (Bonus if time):

  • Enhanced Features:
    - Phone number validation
    - Country dropdown
    - Marketing opt-in
    - Progressive enhancement (works without JS)

    VERIFICATION PROTOCOL:

  • First verify all CRITICAL criteria
  • If time permits, verify IMPORTANT criteria
  • Only attempt NICE-TO-HAVE if others pass
`

Improvements:
    • Prioritization (critical/important/nice-to-have)
    • Logical grouping by concern
    • Progressive verification
    • Realistic scope

    Criteria Prioritization Template:

    `markdown

    PRIORITIZED CRITERIA TEMPLATE

    TIER 1: ESSENTIAL (Blocking)

    [3-5 criteria maximum] These must pass for the solution to be usable at all.

    TIER 2: IMPORTANT (Quality)

    [5-8 criteria] These significantly affect user experience or maintainability.

TIER 3: NICE-TO-HAVE (Bonus)

[2-3 criteria] These add polish but should only be attempted after Tiers 1 and 2 pass.
`

    # Frequently Asked Questions (20 Questions)

    1. What exactly is a ralph prompt?

    A ralph prompt is a structured prompt designed to initiate a "Ralph Loop"—a methodology where Claude Code (or another capable AI) breaks complex work into atomic tasks, defines explicit pass/fail criteria for each, and iterates autonomously until all criteria are met. It's not just a question or instruction; it's a self-contained workflow specification. Example:
`
You are executing a Ralph Loop.
TASK: Write a Python function to validate email addresses.
CRITERIA:
  • Function must be named validate_email.
  • It must return True for valid emails and False otherwise.
  • It must correctly validate the basic format: name@domain.tld.
  • It must reject strings with spaces.
  • You must write a test suite with 5 test cases that passes.

ACTION: Write the code, then execute your tests. If all tests pass, present the final code. If any fail, diagnose, fix, and retest.
    `
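
For reference, one plausible result of that loop is sketched below; the criteria leave room for stricter regexes, so treat this as an illustration rather than the canonical answer:

`python
import re

EMAIL_RE = re.compile(r"^[^\s@]+@[^\s@]+\.[^\s@]+$")

def validate_email(address: str) -> bool:
    # Basic format check: something@domain.tld with no whitespace anywhere
    return bool(EMAIL_RE.match(address))

# Test suite with 5 cases, as the criteria require
assert validate_email("user@example.com") is True
assert validate_email("first.last@sub.domain.org") is True
assert validate_email("no at sign") is False
assert validate_email("has space@domain.com") is False
assert validate_email("missing-tld@domain") is False
print("all tests pass")
`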

    2. How is it different from regular prompting?

    Regular prompts are typically one-off requests ("Write a function to validate emails"). A ralph prompt adds a self-verification and iteration layer. The AI is instructed to define or follow strict criteria, test its output against them, and not proceed until everything passes. It shifts the AI from a "suggestor" to an "executor."

    3. Do ralph prompts work with ChatGPT?

    The core concept of defining criteria and iterating can be applied to advanced models like GPT-4, especially with Code Interpreter. However, the full autonomous "Ralph Loop" is specifically engineered for Claude Code, which can execute code, run tests, and diagnose failures in a single continuous session without human intervention.

    4. How many iterations should I allow?

There's no fixed limit, but you should design atomic tasks to be small enough that they succeed within 3-5 iterations. If a task consistently fails beyond that, the task itself may need to be broken down further, or the criteria may be ambiguous or impossible.

    5. Do ralph prompts use more tokens?

    Yes, significantly. Because the AI generates code, runs tests, analyzes failures, and rewrites code, the conversation history is much longer. This is a trade-off for achieving verifiably correct outputs without manual intervention. Budget for longer, more expensive sessions when using this method.

    6. Can I use ralph prompts for creative work?

    Yes, but you must define objective, testable criteria. Instead of "write a catchy slogan," you might specify:
    • CRITERIA: The slogan must be under 10 words. It must include the word "innovate." It must pass a readability score of >70.
    The AI can then use tools to check word count, keyword inclusion, and score readability.

    7. How do I know if my criteria are good?

    Good criteria are SMART: Specific, Measurable, Achievable, Relevant, and Testable by the AI itself. Avoid subjective language ("elegant code"). Use objective checks ("function must have a time complexity of O(n log n)" or "must pass all provided unit tests").

    8. What if the AI keeps failing?

  • Break the task down further. The atomic task might still be too complex.
  • Clarify your criteria. Ambiguity causes loops.
  • Provide a starting example or template.
  • Check for impossible criteria (e.g., conflicting requirements).

9. Should every prompt be a ralph prompt?

    No. Use ralph prompts for complex, high-stakes, or verifiable outputs where correctness is paramount. For brainstorming, simple Q&A, or exploratory tasks, traditional prompting is faster and more efficient.

    10. How do I handle subjective criteria?

    Quantify them. Convert "make the UI look modern" into criteria like:
    • Use a CSS color palette from Coolors.co.
    • Ensure button contrast ratio meets WCAG AA standard (>= 4.5:1).
    • Use a sans-serif font stack.
    The AI can use tools to validate color contrast and identify font families.

    11. What's the minimum task size?

    An atomic task should be the smallest unit of work that produces a verifiable and useful result. This could be writing a single function with tests, formatting a dataset according to three specific rules, or correcting all broken links in one HTML file.

    12. Can ralph prompts handle uncertainty?

    They are designed for deterministic tasks. For uncertain domains (e.g., market predictions), criteria must be based on process or format, not the unpredictable outcome. Example:
`
CRITERIA: The analysis must include a Monte Carlo simulation with 10,000 iterations. The output must be a table with columns 'Scenario', 'Probability', and 'Impact'.
`

    13. How do I debug a failing ralph prompt?

    Examine the AI's internal dialogue in its output. It will often state why a test failed. Use this to:
    • Fix ambiguous criteria.
    • Provide more context.
    • Seed the prompt with a partial solution.
    Start with a simple, known-working example to validate your prompt structure.

    14. What about tasks with no clear pass/fail?

    All tasks can have pass/fail if framed correctly. For a task like "draft an email," criteria could be:
    • Contains a clear subject line.
    • Includes a call-to-action.
    • Is under 150 words.
    • Has no spelling errors (run through spellcheck).
    The AI can verify word count and spelling autonomously.

    15. Can I use ralph prompts for conversations?

    The Ralph Loop is for task execution, not open-ended dialogue. However, you can manage a complex conversational goal with criteria (e.g., "extract all action items from this meeting transcript; criteria: list must be in a markdown table, each action must have an owner and deadline").

    16. How do I measure ralph prompt effectiveness?

    Track:
    • Success Rate: Percentage of tasks that complete with all criteria met.
    • Iteration Count: Average loops needed per task.
    • Time/Tokens to Completion: Efficiency.
    • Output Quality: Reduced need for human correction post-delivery.

    17. What's the learning curve?

    For a technical user familiar with concepts like unit testing and specifications, it's low. The main shift is learning to think in terms of atomic, verifiable tasks. For others, start by modifying existing templates from Ralphable.

    18. Can teams share ralph prompts?

    Absolutely. They are markdown files. Teams should build a library of vetted ralph prompts (skills) for common tasks like "data cleaning pipeline," "API endpoint generation," or "report formatting," ensuring consistency and quality across projects.

    19. How does Ralphable help with ralph prompts?

    Ralphable is a repository and generator for skills—pre-built, community-vetted ralph prompts packaged as markdown files. Instead of crafting prompts from scratch, you can browse, customize, and deploy skills for hundreds of specific tasks, dramatically reducing setup time.

    20. What's the future of ralph prompting?

    It points toward a future of autonomous AI agents. As models improve, the ability to specify a goal with rigorous criteria and let the AI handle the entire execution loop will become standard for software development, data analysis, and content generation. Ralph prompts are the blueprint for reliable AI collaboration.

    ---

    # Conclusion

    The advent of powerful AI like Claude Code has moved us beyond simple chat interfaces into the realm of true AI collaboration. The Ralph Loop methodology and the ralph prompts that power it represent a fundamental shift in how we interact with artificial intelligence. This isn't about asking for a suggestion; it's about issuing a verifiable command and receiving a guaranteed result.

    The core power of a ralph prompt lies in its marriage of specification and autonomous execution. By forcing the breakdown of work into atomic tasks and defining unambiguous pass/fail criteria, we overcome the "good enough" ambiguity that plagues traditional AI outputs. The AI becomes a relentless quality engineer, testing and iterating on its own work until every box is checked. This is invaluable for coding, data transformation, document generation, and any task where errors are costly.

    Key takeaways for your workflow:
  • Think Atomically: Before prompting, break your project into the smallest verifiable units of work.
  • Define Objective Criteria: If you can't write a test for it, the AI can't validate it. Use numbers, rules, and existing test suites.
  • Embrace the Loop: Let the AI run. The token cost of iteration is an investment in a correct, final product.
  • Start with Templates: Don't build from a blank page. Leverage the growing ecosystem of pre-built skills.

Getting started is straightforward. Identify a repetitive, rule-based task in your daily work. Write down the exact rules for success. Then, structure them into a ralph prompt using the TASK/CRITERIA/ACTION format and let Claude Code run. You'll quickly see the difference between an AI that tries and an AI that delivers.

    To jumpstart this process, visit Ralphable. Our platform is dedicated to generating, sharing, and refining these powerful skills. Browse the skill library for your domain, customize a template in seconds, and deploy a more capable, reliable AI assistant today. The future of work isn't just human or AI—it's human defining the mission and AI executing it with precision. Ralph prompts are your tool to command that future.

---

Last updated: January 2026

    Ready to try structured prompts?

    Generate a skill that makes Claude iterate until your output actually hits the bar. Free to start.

    Written by Ralphable Team

    Building tools for better AI outputs