Falk Gottlob · 9 min read

Build a Customer Feedback Pipeline in One Afternoon

Connect Zendesk, Slack, and app reviews into a single AI-powered insight engine. Step-by-step setup guide with code snippets.


The short version

This is a one-afternoon setup that aggregates customer feedback from Zendesk, Slack, and app store reviews into a single AI-processed digest. The architecture is three steps: extract (Zapier or a Python script hitting the Zendesk and Slack APIs), process (Claude's API with a structured prompt that returns category, sentiment, urgency, segment, quote, and action item as JSON), digest (post to Slack at 7 AM or email). Total setup time is 2-3 hours, mostly waiting on API keys. Ongoing time is 5 minutes a day. By Day 2 morning, you're seeing customer patterns before they become problems. Code snippets are inline. Pick the four or five sources where your customers actually talk to you and ship it tonight.

Customer feedback sits scattered everywhere: Zendesk tickets, Slack threads, G2 reviews, app store comments, NPS responses. You lose signal because the data exists in silos. By tonight, you'll have a system that automatically aggregates, processes, and summarizes feedback into actionable insights.

This isn't a complicated ETL pipeline. It's a pragmatic afternoon project: a few hours of setup and one simple script that become a daily competitive advantage.

Why This Matters (And Why You're Probably Not Doing It)

Most PMs check feedback sources manually. Monday morning: check Zendesk. Tuesday: scan the G2 review notification. Wednesday: someone mentions something in a customer call. By Friday, you've lost the pattern.

The teams that move faster have feedback flowing into their daily workflow automatically. They see patterns emerge in real time. When a feature ships and three customers report the same confusion within hours, they know it. When competitor feature X shows up in four support tickets, they see it.

This pipeline takes you from reactive to predictive. You're not waiting for Friday's customer call to learn something broke. You know Wednesday morning.

Step 1: Choose Your Feedback Sources (15 minutes)

Pick 4-5 sources where your customers actually talk to you or about you:

Primary sources (you control):

  • Zendesk, Intercom, or Help Scout (support tickets + email)
  • Slack channels where customers post directly
  • NPS surveys (if you send them)
  • In-app feedback widgets

Secondary sources (they're public):

  • G2, Capterra, or Trustpilot reviews
  • App Store / Google Play reviews
  • Twitter mentions and replies
  • Your product changelog comments

Optional but valuable:

  • LinkedIn (your company, your competitors)
  • Reddit (search your product name)
  • Blog comments

For this guide, I'll focus on the sources most PMs can connect in one afternoon: Zendesk, Slack, and app store reviews. The architecture works for all sources.

Step 2: Set Up Data Extraction (30 minutes)

You have three options depending on your technical comfort:

Option A: Zapier (No Code, Fastest)

  1. Create a new Zapier automation for each source.

  2. Zendesk → Google Sheets (append new tickets daily)

    • Trigger: New Ticket
    • Action: Add row to Google Sheet
    • Include: ticket ID, subject, description, status, priority, customer email
  3. Slack → Google Sheets

    • Use Slackbot to post to a dedicated channel (#feedback)
    • Zapier watches that channel for new messages
    • Action: Add row to same Google Sheet
  4. Google Play reviews → Google Sheets

    • Zapier doesn't have native Google Play integration
    • Use App Annie or Sensor Tower (they do have Zapier integrations)
    • Or manually export weekly (takes 3 minutes)

Cost: Free to $25/month depending on volume. Time to set up: 20 minutes. Maintenance: None.

Option B: API + Simple Script (Technical, Most Control)

If you have access to a developer and 30 minutes, this is cleaner long-term.

Create a Python script that runs daily (via cron or AWS Lambda):

import requests
import json
from datetime import datetime, timedelta

# Configuration
ZENDESK_DOMAIN = "your-company.zendesk.com"
ZENDESK_EMAIL = "you@your-company.com"
ZENDESK_API_TOKEN = "your_api_token"
SLACK_WEBHOOK = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"

def fetch_zendesk_tickets():
    """Get all tickets updated in the last 24 hours"""
    yesterday = (datetime.now() - timedelta(days=1)).strftime("%Y-%m-%d")
    # Date-range queries live on the search endpoint, not /tickets.json
    url = f"https://{ZENDESK_DOMAIN}/api/v2/search.json"

    params = {
        "query": f"type:ticket updated>={yesterday}",
        "per_page": 100
    }

    response = requests.get(
        url,
        params=params,
        # API-token auth: "email/token" as the username, token as the password
        auth=(f"{ZENDESK_EMAIL}/token", ZENDESK_API_TOKEN),
        headers={"Accept": "application/json"}
    )
    response.raise_for_status()
    return response.json()['results']

def fetch_slack_feedback():
    """Get messages from #feedback channel"""
    # Requires Slack SDK: pip install slack-sdk
    from slack_sdk import WebClient

    client = WebClient(token="xoxb-your-slack-token")
    yesterday = int((datetime.now() - timedelta(days=1)).timestamp())

    messages = client.conversations_history(
        channel="C1234567890",  # #feedback channel ID
        oldest=yesterday
    )
    return messages['messages']

def compile_feedback():
    """Combine all sources"""
    zendesk_tickets = fetch_zendesk_tickets()
    slack_messages = fetch_slack_feedback()

    feedback_list = []

    for ticket in zendesk_tickets:
        feedback_list.append({
            "source": "zendesk",
            "id": ticket['id'],
            "content": f"{ticket['subject']}: {ticket['description']}",
            "timestamp": ticket['updated_at']
        })

    for msg in slack_messages:
        feedback_list.append({
            "source": "slack",
            "id": msg['ts'],
            "content": msg['text'],
            "timestamp": msg['ts']
        })

    return feedback_list

if __name__ == "__main__":
    feedback = compile_feedback()
    print(json.dumps(feedback, indent=2))
    # Next step: send to AI layer

Deploy this as a Lambda function (5 minutes) or cron job on your server. It runs daily, compiles all feedback into JSON, and passes it to the next step.
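For the Lambda route, the entry point is a thin wrapper around the script. A minimal sketch (the placeholder here stands in for the `compile_feedback` defined above; wire it to an EventBridge schedule rule for the daily trigger):

```python
import json

def compile_feedback():
    """Placeholder -- in the real bundle, use compile_feedback from the script above."""
    return []

def lambda_handler(event, context):
    """Daily entry point, invoked by the schedule rule."""
    feedback = compile_feedback()
    if not feedback:
        return {"statusCode": 200, "body": "No new feedback today"}
    # Hand the batch to the AI layer (Step 3) and digest (Step 4) from here
    return {
        "statusCode": 200,
        "body": json.dumps({"items_collected": len(feedback)})
    }
```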

Cost: Free (if using existing infrastructure). Time to set up: 30 minutes (if you have a dev). Maintenance: Very low.

Step 3: AI Processing Layer (The Brain)

This is where the magic happens. You take raw feedback and convert it into insights.

You'll use Claude's API or another LLM. The key is the prompt template. Here's exactly what works:

You are a product insights analyst. You will receive customer feedback from multiple channels (support tickets, reviews, Slack messages). Your job is to extract structured insights.

For each piece of feedback:
1. **Category**: What area of the product does this relate to? (e.g., Pricing, Onboarding, Performance, UX, Feature Request, Bug Report, Competitor Mention)
2. **Sentiment**: Positive, Neutral, Negative, or Mixed
3. **Urgency**: Critical (blocking customers), High (common complaint), Medium (isolated issue), Low (nice-to-have feedback)
4. **Customer Segment**: Enterprise, Mid-Market, Startup, Agency, or Unknown
5. **Exact Quote**: The most important 1-2 sentences from the feedback
6. **Action Item**: What should the product team do about this? (if applicable)

Process this batch of feedback and return valid JSON:

[FEEDBACK BATCH HERE]

Return only valid JSON in this format:
{
  "insights": [
    {
      "feedback_id": "string",
      "source": "zendesk|slack|review|nps",
      "category": "string",
      "sentiment": "positive|neutral|negative|mixed",
      "urgency": "critical|high|medium|low",
      "customer_segment": "string",
      "quote": "string",
      "action_item": "string or null"
    }
  ],
  "summary": {
    "total_feedback_items": number,
    "top_3_themes": ["string", "string", "string"],
    "critical_issues": ["string"],
    "competitor_mentions": ["string"],
    "feature_requests": ["string"]
  }
}

In your Python script, add this function:

import anthropic

def process_feedback_with_ai(feedback_list):
    """Send feedback batch to Claude for analysis"""
    client = anthropic.Anthropic(api_key="your_api_key")

    # Format feedback for Claude
    feedback_text = "\n\n".join([
        f"Source: {f['source']}\nID: {f['id']}\nContent: {f['content']}"
        for f in feedback_list
    ])

    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=2048,
        messages=[
            {
                "role": "user",
                "content": f"""You are a product insights analyst. You will receive customer feedback from multiple channels (support tickets, reviews, Slack messages). Your job is to extract structured insights.

For each piece of feedback:
1. **Category**: What area of the product does this relate to? (e.g., Pricing, Onboarding, Performance, UX, Feature Request, Bug Report, Competitor Mention)
2. **Sentiment**: Positive, Neutral, Negative, or Mixed
3. **Urgency**: Critical (blocking customers), High (common complaint), Medium (isolated issue), Low (nice-to-have feedback)
4. **Customer Segment**: Enterprise, Mid-Market, Startup, Agency, or Unknown
5. **Exact Quote**: The most important 1-2 sentences from the feedback
6. **Action Item**: What should the product team do about this? (if applicable)

Process this batch of feedback and return valid JSON:

{feedback_text}

Return only valid JSON with no markdown formatting:
{{
  "insights": [
    {{
      "feedback_id": "string",
      "source": "zendesk|slack|review|nps",
      "category": "string",
      "sentiment": "positive|neutral|negative|mixed",
      "urgency": "critical|high|medium|low",
      "customer_segment": "string",
      "quote": "string",
      "action_item": "string or null"
    }}
  ],
  "summary": {{
    "total_feedback_items": number,
    "top_3_themes": ["string", "string", "string"],
    "critical_issues": ["string"],
    "competitor_mentions": ["string"],
    "feature_requests": ["string"]
  }}
}}"""
            }
        ]
    )

    # Parse response
    import json
    return json.loads(message.content[0].text)

What this does: Takes messy feedback and categorizes it automatically. Identifies critical issues and emergent themes without manual review.
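One gotcha with that final `json.loads`: models occasionally wrap the reply in a markdown code fence despite the "no markdown" instruction. A small defensive parser (this is a hedge against an observed failure mode, not documented API behavior) keeps the daily run from crashing:

```python
import json

def parse_model_json(raw_text):
    """Strip an optional ```json fence before parsing the model's reply."""
    text = raw_text.strip()
    if text.startswith("```"):
        # Drop the opening fence line, then everything after the closing fence
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    return json.loads(text)
```

Swap it in for the bare `json.loads(message.content[0].text)` call.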

Step 4: Daily Digest Setup (15 minutes)

Once Claude processes the feedback, you need it somewhere you'll actually see it.

Option A: Slack Digest

import requests

def send_slack_digest(insights):
    """Post daily digest to Slack"""
    critical_issues = [i['quote'] for i in insights['insights']
                       if i['urgency'] == 'critical']
    feature_requests = insights['summary']['feature_requests']
    themes = insights['summary']['top_3_themes']

    message = f"""📊 *Daily Feedback Digest*

🚨 *Critical Issues ({len(critical_issues)})*
{chr(10).join([f"• {issue}" for issue in critical_issues[:3]])}

✨ *Top Feature Requests*
{chr(10).join([f"• {req}" for req in feature_requests[:3]])}

📈 *Emerging Themes*
{chr(10).join([f"• {theme}" for theme in themes])}

See full report: https://your-dashboard.com/feedback"""

    requests.post(SLACK_WEBHOOK, json={"text": message})

Option B: Email Digest

import smtplib
from email.mime.text import MIMEText

def send_email_digest(insights, recipient_email):
    """Send digest via email"""
    subject = "Daily Feedback Digest"
    body = f"""
    Critical Issues: {len([i for i in insights['insights'] if i['urgency'] == 'critical'])}
    Top Themes: {', '.join(insights['summary']['top_3_themes'])}

    Full details attached or visit your dashboard.
    """

    msg = MIMEText(body)
    msg['Subject'] = subject
    msg['From'] = "feedback@yourcompany.com"
    msg['To'] = recipient_email

    # Send via your mail service (host and credentials are placeholders)
    with smtplib.SMTP("smtp.yourcompany.com", 587) as server:
        server.starttls()
        server.login("feedback@yourcompany.com", "your_password")
        server.send_message(msg)

Option C: A Simple Dashboard

If you want it always visible, add a simple HTML page:

<!DOCTYPE html>
<html>
<head>
    <title>Feedback Dashboard</title>
</head>
<body>
    <h1>Today's Feedback Summary</h1>
    <div id="digest"></div>

    <script>
        fetch('/api/feedback-digest')
            .then(r => r.json())
            .then(data => {
                document.getElementById('digest').innerHTML = `
                    <p>Critical Issues: ${data.summary.critical_issues.length}</p>
                    <p>Top Themes: ${data.summary.top_3_themes.join(', ')}</p>
                `;
            });
    </script>
</body>
</html>
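The page above expects a `/api/feedback-digest` endpoint. A stdlib-only sketch of one, assuming the daily job writes its output to a `digest.json` file (swap in Flask or FastAPI if you already run one):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class DigestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/feedback-digest":
            # Serve the latest digest the daily job wrote to disk
            with open("digest.json", "rb") as f:
                body = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

def serve(port=8000):
    HTTPServer(("", port), DigestHandler).serve_forever()
```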

Putting It All Together (The Full Workflow)

  1. Day 1, afternoon: Set up Zapier feeds OR deploy the Python script
  2. Day 1, evening: Add your AI processing function
  3. Day 2, morning: First digest lands in Slack/email
  4. Day 2 onward: You see customer patterns before they become big problems
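Wired together, the daily job is mostly glue. A sketch of that glue, with the pipeline stages passed in as plain functions (the names in the usage note are the ones defined in Steps 2-4):

```python
def run_daily_pipeline(extract, analyze, deliver):
    """Glue for the whole pipeline: extract -> AI analysis -> digest.

    extract:  () -> list of feedback dicts      (Step 2)
    analyze:  list -> insights dict             (Step 3)
    deliver:  insights dict -> None             (Step 4)
    """
    feedback = extract()
    if not feedback:
        return None  # quiet day, no digest
    insights = analyze(feedback)
    deliver(insights)
    return insights
```

In the real script: `run_daily_pipeline(compile_feedback, process_feedback_with_ai, send_slack_digest)`.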

Total setup time: 2-3 hours, mostly waiting for API keys. Ongoing time: 5 minutes per day to review the digest. ROI: you catch friction points weeks before they show up as churn, spot the feature requests ten customers are asking for in different words, and see when messaging resonates.

Common Gotchas

Duplicate feedback: The same issue will appear in Zendesk, reviews, and Slack. Build a simple deduplication layer: group insights by keyword similarity, then weight by count.
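A rough version of that dedup layer using stdlib `difflib` for the similarity check (the 0.75 threshold is a starting guess to tune, not gospel):

```python
from difflib import SequenceMatcher

def dedupe_insights(insights, threshold=0.75):
    """Group near-duplicate insights and weight each group by count."""
    groups = []  # each entry: {"insight": dict, "count": int}
    for item in insights:
        for group in groups:
            similarity = SequenceMatcher(
                None,
                item["quote"].lower(),
                group["insight"]["quote"].lower()
            ).ratio()
            if similarity >= threshold:
                group["count"] += 1
                break
        else:
            groups.append({"insight": item, "count": 1})
    # Most-reported issues first
    return sorted(groups, key=lambda g: g["count"], reverse=True)
```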

Sentiment isn't binary: Most real feedback is mixed. A customer loves your UI but hates your pricing. Your prompt should capture both.

Action items matter: Don't just categorize. For every critical issue, your AI layer should suggest a concrete next step ("add retry mechanism," "clarify docs," "revisit pricing model").

Privacy: Stay compliant by stripping PII before storing feedback. That email address in a Zendesk ticket shouldn't persist in your JSON.
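A starting point for the PII scrub, with regexes for emails and phone-ish numbers (the patterns are illustrative, not exhaustive; real compliance needs review with your legal team):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text):
    """Redact obvious emails and phone numbers before storing feedback."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text
```

Run every `content` field through it in `compile_feedback` before anything is written to disk or sent to the model.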

Next Steps

Once you have 2 weeks of data flowing, you'll start seeing patterns human review would never catch. That's when you can build on this: alerts for spikes in specific categories, trend analysis over time, or even feedback→PRD generation.

For now, get it running today. Check your digest tomorrow morning. You'll immediately see why this matters.
