---
name: selge-optimize-survey
description: Diagnoses an underperforming Selge survey (low completion rate, high drop-off, vague open-text responses) and proposes targeted rewrites for specific questions. Use when the user says a survey "isn't getting good answers", asks how to fix or improve a survey, mentions low completion rates, or wants question-level optimization. Surfaces the offending question with a concrete rewrite, never vague advice.
---

# Optimize a Selge survey

You diagnose a specific Selge survey and propose targeted rewrites to the questions that are underperforming. The user picks which suggestions to apply.

## Workflow

### 1. Pick the survey

If the user names a survey, use it. Otherwise:

1. Call `list_surveys` with `status: "active"`.
2. Flag optimization candidates: completion rate under 50%, response volume noticeably lower than comparable surveys, or an explicit user complaint ("this one isn't working").
3. Surface up to 2 candidates with their stats and ask the user which to optimize. Don't optimize multiple at once - the value is focus.
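The filter above can be sketched as a small function. This is a hedged illustration only: it assumes `list_surveys` returns records with `name`, `completion_rate` (0-1), and `responses` fields, which are hypothetical field names, not the documented Selge schema.

```python
def optimization_candidates(surveys, max_candidates=2):
    """Flag surveys worth optimizing, worst completion rate first."""
    # Median response volume across active surveys, as the "peers" baseline
    median_volume = sorted(s["responses"] for s in surveys)[len(surveys) // 2]
    flagged = [
        s for s in surveys
        if s["completion_rate"] < 0.5            # completion under 50%
        or s["responses"] < 0.5 * median_volume  # volume well below peers
    ]
    # Cap at two candidates so the user stays focused on one survey
    flagged.sort(key=lambda s: s["completion_rate"])
    return flagged[:max_candidates]
```

An explicit user complaint bypasses the filter entirely: if the user says a survey isn't working, it is a candidate regardless of its stats.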

### 2. Gather evidence in parallel

- `get_survey` for the full question structure
- `get_results` for question-level breakdown and AI summary
- `list_responses` with `survey_id` for raw answers (cap at 50 most recent)
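The three calls are independent, so they can run concurrently. A minimal sketch, assuming an async client whose method names mirror the MCP tools (the client interface and `limit` parameter are assumptions):

```python
import asyncio

async def gather_evidence(client, survey_id):
    """Fetch structure, results, and raw responses in parallel."""
    survey, results, responses = await asyncio.gather(
        client.get_survey(survey_id),
        client.get_results(survey_id),
        client.list_responses(survey_id, limit=50),  # cap at 50 most recent
    )
    return survey, results, responses
```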

### 3. Diagnose, don't guess

Run the MCP prompt `optimize-survey` with the survey_id - it applies the standard heuristics. Layer on what the response data shows:

- **Drop-off question**: Where do people stop answering? That's the broken question.
- **High "Other" rate**: Multiple-choice options don't match how visitors actually think.
- **Empty open-text**: The question is too vague or too intrusive.
- **NPS without follow-through**: A score with no comment usually means the NPS question landed but the follow-up prompt didn't.
- **Time-on-question outlier**: If one question takes 3x longer than the others and still drops people, it's friction.
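The drop-off heuristic above can be made concrete. This sketch assumes each record from `list_responses` carries an `answers` dict keyed by question id; that shape is an assumption, not the documented response format.

```python
def answer_counts(responses, question_ids):
    """Count non-empty answers per question, in survey order."""
    counts = {qid: 0 for qid in question_ids}
    for r in responses:
        for qid in question_ids:
            if r["answers"].get(qid) not in (None, ""):
                counts[qid] += 1
    return counts

def biggest_drop(counts, question_ids):
    """Find the question where the largest share of respondents bail."""
    worst_qid, worst_drop = None, 0.0
    for prev, cur in zip(question_ids, question_ids[1:]):
        if counts[prev] == 0:
            continue
        drop = (counts[prev] - counts[cur]) / counts[prev]
        if drop > worst_drop:
            worst_qid, worst_drop = cur, drop
    return worst_qid, worst_drop
```

The question returned by `biggest_drop` is the prime rewrite candidate; cite its drop percentage in the "Issue" line of the output.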

### 4. Output contract

```
Optimization candidates for <survey name>

Current performance: <n> responses, <%> completion, <key drop-off if any>

Suggested changes:

1. Question <n>: "<current text>"
   Issue: <one-line, specific - cite the data>
   Rewrite: "<new text>"
   Why: <one sentence>

2. ...

(Cap at 3 suggestions. If only 1 or 2 apply, ship 1 or 2.)

Apply these via update_survey? [yes / no / which ones]
```

### 5. Apply only what the user approves

- "Yes" or "apply all": call `update_survey` with all changes.
- "Just 1 and 3" (or similar): call `update_survey` with only those.
- "No": acknowledge, hand off, stop.

Editing live surveys is supported - the survey keeps its current status (active stays active, draft stays draft). Existing responses are preserved; new responses use the new question wording.
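Building the partial-approval case can be sketched as follows. The `questions` patch shape and the `question_id` / `rewrite` fields are hypothetical; check the actual `update_survey` schema before relying on them.

```python
def build_update_payload(survey_id, suggestions, approved):
    """Assemble an update_survey payload from user-approved suggestions.

    `suggestions` is the numbered list shown to the user (1-indexed);
    `approved` is the set of numbers the user said yes to.
    """
    return {
        "survey_id": survey_id,
        "questions": [
            {"id": s["question_id"], "text": s["rewrite"]}
            for i, s in enumerate(suggestions, start=1)
            if i in approved
        ],
    }
```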

## Don'ts

- Don't suggest more than 3 changes at once. Forcing prioritization is the whole point.
- Don't rewrite questions that are performing fine. Leave the working parts alone.
- Don't change widget style, targeting, or trigger as part of this skill - `selge-new-survey` and direct MCP tools handle those.
- Don't propose removing questions unless the data clearly shows dead weight (near-zero engagement, not load-bearing for the rest of the funnel).
- Don't apply changes without explicit confirmation. "Looks good" counts as yes. Silence does not.
- Don't optimize a survey with under 30 responses - the data is too noisy. Tell the user to wait for more data first.
