Most "AI prompt" content on the internet is written for people who want to be more creative. Venture operators don't need creative assistance — they need reliable outputs that integrate into operational workflows. The prompts below are the ones we've kept after testing hundreds across 60+ automation deployments. Each one does a specific job. Each one has a designed output format. None of them start with "Act as an expert."
By Diosh Lequiron, PhD, MBA, CSM — President & CEO, HavenWizards 88 Ventures OPC
Last updated: May 9, 2026
Why Prompt Quality Matters More Than Model Choice
The difference between a useful AI output and a useless one is rarely the model. It is the prompt. We run automations on the Anthropic API (Claude), the OpenAI API (GPT-4o), and, for some tasks, Google's Gemini. Across all three, output quality tracks the same three factors: how specific the task is, how specific the output format is, and how clearly the prompt defines what "correct" looks like.
When we first deployed AI into our n8n workflows at Bayanihan Harvest, we used generic prompts — "summarize this order," "classify this inquiry." The outputs were technically correct but operationally useless. A "summary" that includes everything is no summary at all. A classification with no confidence score doesn't tell our operations team whether to trust it or verify it manually.
The prompts below are the result of 18 months of refinement. For each one, we share the prompt template and the output format it is designed to produce.
The 15 Prompts
1 — Customer Inquiry Classification
Use case: Incoming inquiries via contact form or email need to be routed to the right team member before a human reads them.
Classify the following customer inquiry into EXACTLY ONE of these categories:
- PARTNER_INQUIRY (asking about partnership, collaboration, or equity)
- EDUCATION_INQUIRY (asking about courses, workshops, or learning programs)
- BUILD_POD_INQUIRY (asking about execution team or outsourcing services)
- PRESS_INQUIRY (media, podcast, interview request)
- GENERAL (anything else)
Provide your classification as a JSON object with three fields:
{
"category": "[CATEGORY]",
"confidence": "[HIGH/MEDIUM/LOW]",
"routing_note": "[One sentence explaining the classification]"
}
Inquiry text:
[INQUIRY_TEXT]
Why this works: The output format is machine-readable. Confidence level tells the n8n workflow whether to auto-route (HIGH) or flag for human review (MEDIUM/LOW).
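In the n8n workflow, the routing step after this prompt reduces to a few lines. The sketch below shows the pattern; the queue names are our own illustration, not HW88's actual routing table:

```python
import json

# Category-to-queue mapping. Queue names are illustrative assumptions;
# only HIGH-confidence classifications are auto-routed.
QUEUES = {
    "PARTNER_INQUIRY": "partnerships",
    "EDUCATION_INQUIRY": "education",
    "BUILD_POD_INQUIRY": "build_pods",
    "PRESS_INQUIRY": "press",
    "GENERAL": "general_inbox",
}

def route_inquiry(model_response: str) -> str:
    result = json.loads(model_response)
    if result.get("confidence") != "HIGH":
        return "HUMAN_REVIEW"  # MEDIUM/LOW goes to a person first
    return QUEUES.get(result.get("category"), "HUMAN_REVIEW")
```

An unknown category also falls back to human review, so a malformed model response degrades to a manual check rather than a misroute.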
2 — SOP Compliance Check
Use case: After a team member logs a completed task, this prompt checks whether the recorded steps match the SOP.
You are reviewing whether a task execution log follows the required SOP steps.
SOP STEPS (must all be completed in this order):
[SOP_STEPS_LIST]
EXECUTION LOG:
[LOG_ENTRY]
Return a JSON object:
{
"compliant": true/false,
"missing_steps": ["step X", "step Y"],
"out_of_order": true/false,
"notes": "One sentence observation if non-compliant, empty string if compliant"
}
Where we use this: AHA eCommerce order fulfillment logging. Every fulfillment log is checked automatically. Non-compliant logs trigger a Slack notification to the pod lead.
3 — Content Brief Generator
Use case: When a content calendar date arrives, generate a structured brief from a topic keyword.
Generate a content brief for the following topic: [TOPIC]
Target audience: Philippine startup founders and operators with 1–5 years of experience.
Content type: Practitioner playbook (not theory, not news).
Publication: HavenWizards 88 Insights.
Output as JSON:
{
"working_title": "string — 10 words max, keyword-first",
"angle": "string — the contrarian or counter-intuitive frame, 1 sentence",
"main_argument": "string — what the reader will walk away knowing",
"proof_sources_needed": ["list of proof types: metrics, case study, comparison, tool-specific, etc."],
"avoid": ["list of angles or framings that would make this generic"],
"h2_skeleton": ["H2 heading 1", "H2 heading 2", "H2 heading 3", "H2 heading 4"]
}
4 — Farmer Data Validation (Bayanihan Harvest)
Use case: Incoming farmer registration data has inconsistencies from SMS-based collection. This prompt standardizes the data before database insertion.
Validate and clean the following farmer registration data. Return ONLY the cleaned JSON object with no explanation.
Input data:
[RAW_DATA]
Required output schema:
{
"farmer_name": "Proper case, no honorifics",
"barangay": "string — standardize to official barangay name if recognizable",
"municipality": "string",
"province": "string",
"primary_crop": "string — standardize to: vegetable, fruit, grain, root_crop, other",
"secondary_crop": "string or null",
"harvest_frequency_months": number,
"preferred_payment": "gcash | bank_transfer | cash",
"data_quality": "CLEAN | NEEDS_REVIEW | INCOMPLETE",
"quality_notes": "string — what needs human review, empty if CLEAN"
}
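Before the database insert, it is worth schema-checking the model's output in code rather than trusting it to have honored the schema. The gate below is our suggested pattern, with field names taken from the schema above:

```python
import json

REQUIRED_FIELDS = {
    "farmer_name", "barangay", "municipality", "province",
    "primary_crop", "secondary_crop", "harvest_frequency_months",
    "preferred_payment", "data_quality", "quality_notes",
}
PAYMENT_VALUES = {"gcash", "bank_transfer", "cash"}

def gate_farmer_record(raw_response: str) -> str:
    """Route to 'insert' only for structurally valid CLEAN records;
    everything else goes to a human review queue."""
    record = json.loads(raw_response)
    if REQUIRED_FIELDS - record.keys():
        return "review_queue"  # model dropped a field despite the schema
    if record["preferred_payment"] not in PAYMENT_VALUES:
        return "review_queue"
    return "insert" if record["data_quality"] == "CLEAN" else "review_queue"
```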
5 — Email Response Draft (Inquiry Response)
Use case: First-response drafts for partner inquiries. Human reviews and edits before sending.
Draft a first response to the following inquiry on behalf of HavenWizards 88 Ventures OPC.
Brand voice rules:
- Direct, not effusive
- No "Thank you for reaching out" opener (it's filler)
- No "I hope this email finds you well" (never)
- Acknowledge the specific ask, not the general inquiry
- One clear next step as the close
- Maximum 150 words
Inquiry:
[INQUIRY_TEXT]
Sender name (if available): [NAME]
Output: The email body only. No subject line. No salutation instructions.
6 — Error Log Summarization
Use case: n8n captures error logs from all automation workflows. This prompt summarizes the day's errors into an actionable report.
Summarize the following automation error log for an operations team review.
Output format:
{
"total_errors": number,
"critical_errors": number,
"error_clusters": [
{"pattern": "description of error type", "count": number, "affected_workflows": ["workflow names"], "recommended_action": "string"}
],
"one_line_summary": "plain language summary for the operations team — not for engineers"
}
Error log:
[ERROR_LOG]
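A pre-step we suggest (not part of the prompt itself): collapse exact duplicate messages before prompting, so the model spends its context clustering genuine patterns instead of re-reading repeats:

```python
from collections import Counter

def preaggregate_errors(log_lines):
    """Collapse duplicate error lines into '<count>x <message>' entries,
    most frequent first, before substituting into [ERROR_LOG]."""
    counts = Counter(line.strip() for line in log_lines if line.strip())
    return [f"{n}x {msg}" for msg, n in counts.most_common()]
```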
7 — Competitor Content Gap Analysis
Use case: Monthly content gap analysis. Feed in competitor article titles/URLs scraped from their blogs, get back gap opportunities.
You are a content strategist analyzing gaps between HavenWizards 88's published content and competitor content.
HavenWizards 88 published topics:
[OUR_TOPIC_LIST]
Competitor published topics:
[COMPETITOR_TOPIC_LIST]
HavenWizards 88 niche: Philippine venture studio and holding company operations, AI automation for PH startups, Build Pods (Filipino execution teams).
Identify topics that competitors are covering that we are not, WHERE we could produce a stronger, more specific version with real proof points.
Output as a ranked list of 5 opportunities:
[
{
"topic": "string",
"competitor_coverage": "brief description of how competitors cover it",
"our_differentiated_angle": "how we'd cover it with HW88 proof points",
"estimated_search_intent": "informational | commercial | comparison"
}
]
8 — Product Description (E-Commerce, AHA)
Use case: Generating product listing copy for AHA eCommerce from raw product data.
Write an e-commerce product description for the following product.
Format:
- First line: Product name + one differentiating claim (not "high quality")
- Second paragraph: 2-3 sentences. What it is. Who it's for. What makes it worth choosing.
- Key features: Bulleted list, 3-5 items, each starting with a benefit-noun (not "Our product has...")
- No superlatives without specifics ("fresh" must say from where; "fast" must say how fast)
- Maximum 120 words total
Product data:
[PRODUCT_DATA]
9 — Weekly Operations Report Generator
Use case: Aggregating metrics from Supabase into a human-readable weekly ops summary.
Generate a weekly operations summary from the following metrics data. Audience: venture founder who needs to understand what happened and what needs attention, not a full data dump.
Format:
## Week of [DATE]
**Headline:** [One sentence — the most important development, good or bad]
**Metrics vs. last week:**
[Table: Metric | This Week | Last Week | Change]
**Flags (needs attention):** [Bulleted list — only items that require a decision or action]
**On track:** [Bulleted list — things running normally, no action needed]
**Recommended focus for next week:** [2-3 priorities, not a comprehensive list]
Metrics data:
[METRICS_JSON]
10 — Customer Churn Risk Classification
Use case: Monthly subscription review for HW88 Education. Classify each subscriber's churn risk based on engagement data.
Classify the churn risk of the following subscriber based on their engagement data.
Categories: HIGH_RISK | MEDIUM_RISK | LOW_RISK | ENGAGED
Output JSON:
{
"subscriber_id": "[ID]",
"churn_risk": "[CATEGORY]",
"primary_signal": "one sentence — the main behavioral signal driving this classification",
"recommended_action": "none | win-back email | personal outreach | upsell opportunity"
}
Engagement data:
[ENGAGEMENT_DATA]
11 — SOP First Draft from Process Description
Use case: Pod leads describe a process in plain language; this prompt converts it into a structured SOP draft.
Convert the following process description into a structured SOP using this exact format:
SOP: [Title — trigger-based, not department-based]
Version: 1.0 | Date: [today]
WHO USES THIS: [Role from the description]
SYSTEMS NEEDED: [Extract from description]
WHEN TO USE: [Extract the trigger condition]
STEPS: (maximum 7, each under 20 words, verb-first)
1.
2.
...
EXCEPTION PATH A: [Most common exception case]
A1.
A2.
A3. Escalate to [role] with: [what information to collect first]
RECORD: [How to log this execution]
Process description:
[PROCESS_DESCRIPTION]
If the process requires more than 7 steps, output TWO SOPs with a clear handoff between them.
12 — Partnership Fit Pre-Screen
Use case: When a partnership inquiry arrives, pre-screen before routing to founder.
Evaluate the following partnership inquiry against HavenWizards 88's partnership criteria.
HW88 partnership criteria:
- The partner has an existing business or funded venture (not a startup idea)
- The engagement involves technology build, AI automation, or execution team deployment
- The partner is seeking results, not staff augmentation
- The partner's market is Philippines, Southeast Asia, or the Philippine diaspora
Score each criterion: MET | PARTIALLY_MET | NOT_MET | UNCLEAR
Output:
{
"overall_fit": "STRONG | MODERATE | WEAK | NOT_A_FIT",
"criteria_scores": {"criterion": "score", ...},
"key_question_to_ask": "The one clarifying question that would change the fit assessment most",
"routing": "FOUNDER | TEAM | AUTO_DECLINE"
}
Inquiry:
[INQUIRY_TEXT]
13 — Legal Document Plain-Language Summary
Use case: Before signing contracts, get a plain-language summary of key terms and flags.
Summarize the following contract section in plain language for a non-lawyer founder.
Output format:
{
"what_this_section_does": "one sentence",
"key_obligations_on_us": ["list"],
"key_rights_we_gain": ["list"],
"risk_flags": ["any terms that are unusual, one-sided, or worth negotiating — empty list if none"],
"questions_for_lawyer": ["any provisions that need legal interpretation before signing — empty list if routine"]
}
Contract section:
[CONTRACT_TEXT]
Note: This summary is for initial orientation only. All contracts must be reviewed by a licensed attorney before execution.
14 — Social Post from Article (LinkedIn)
Use case: Convert a published insight article into a LinkedIn post draft.
Write a LinkedIn post based on the following article.
Rules (non-negotiable):
- First line must be a bold claim or counter-intuitive statement — not "New article"
- No engagement bait questions at the end ("What do you think?")
- No hashtag spam — maximum 3 relevant hashtags
- Derived directly from the article — do not add claims not in the article
- Maximum 250 words
- End with one concrete takeaway, not a call to comment
Article:
[ARTICLE_TEXT]
15 — Venture Performance One-Pager
Use case: Monthly one-pager summarizing a venture's performance for the portfolio review.
Generate a venture performance one-pager from the following data.
Format:
**[VENTURE_NAME] — [Month Year]**
Status: [ACTIVE / AT_RISK / PAUSED / EXITED]
**3 Things Working:**
1. [specific, quantified if possible]
2.
3.
**3 Things Not Working:**
1. [specific — not vague]
2.
3.
**Key Decision Needed:**
[One decision that must be made in the next 30 days — or "None" if stable]
**Numbers:**
[Metrics table: whatever is most relevant for this venture type]
**30-Day Focus:**
[2-3 sentences — what the team is optimizing for next month]
Data:
[VENTURE_DATA]
How to Use These Prompts in n8n Automations
Each prompt above is designed to be a node in an n8n workflow. The pattern:
- Trigger — webhook, schedule, form submission, or database change
- Data collection — Supabase query or HTTP request to get the data the prompt needs
- Prompt node — HTTP Request to your AI provider API (Anthropic, OpenAI) with the formatted prompt
- Output parsing — Code node to parse the JSON response
- Routing — If/Switch node based on classification or confidence
- Action — Write to database, send Slack message, trigger follow-up workflow, or email
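With the provider call stubbed out, the pattern above reduces to a few composable functions. This is a sketch of the data flow only; the template is abbreviated from Prompt 1, and in a real workflow `call_model` is the HTTP Request node:

```python
import json

# Abbreviated template; in practice, use the full Prompt 1 text.
PROMPT_TEMPLATE = ("Classify the following customer inquiry...\n\n"
                   "Inquiry text:\n[INQUIRY_TEXT]")

def build_prompt(inquiry_text: str) -> str:
    # Data collection feeds this substitution step.
    return PROMPT_TEMPLATE.replace("[INQUIRY_TEXT]", inquiry_text)

def handle_inquiry(inquiry_text: str, call_model) -> dict:
    prompt = build_prompt(inquiry_text)   # prompt node
    raw = call_model(prompt)              # HTTP request to the AI provider
    result = json.loads(raw)              # output parsing
    # Routing: auto only on HIGH confidence, matching Prompt 1's design.
    result["route"] = "auto" if result["confidence"] == "HIGH" else "manual"
    return result                         # the action node consumes this

# Usage with a stubbed model response:
stub = lambda p: '{"category": "GENERAL", "confidence": "HIGH", "routing_note": "n/a"}'
print(handle_inquiry("Hi, quick question", stub)["route"])  # -> auto
```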
All prompts are designed to output JSON. JSON output is reliable for programmatic use; natural language output is not. If you're integrating AI into an automation workflow, design your prompts to output structured JSON from the start — retrofitting later is expensive.
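Even with JSON-only instructions, models occasionally wrap output in markdown fences or preamble text. A defensive parser in the output-parsing step (our suggested pattern, not specific to any provider) keeps the workflow from failing on those cases:

```python
import json
import re

def parse_model_json(raw: str) -> dict:
    """Extract a JSON object from model output, tolerating markdown
    fences and surrounding prose."""
    raw = raw.strip()
    # Strip a ```json ... ``` fence if the response is wrapped in one.
    fence = re.match(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    if fence:
        raw = fence.group(1)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Fall back to the first {...} span in the text.
        start, end = raw.find("{"), raw.rfind("}")
        if start != -1 and end > start:
            return json.loads(raw[start:end + 1])
        raise
```

If parsing still fails, let the workflow's error path catch the exception and route the item to human review rather than silently dropping it.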
