refactor(pages): remove inline prompts from A/B test AI calls

Remove inline instruction prompts from experiment generation and analysis. These instructions are now defined in ABTestGeneratePlugin and ABTestAnalyzePlugin, eliminating duplication and improving maintainability.
2026-03-07 16:12:14 -05:00
parent 85ab93145e
commit cb2791709e
2 changed files with 0 additions and 30 deletions


@@ -81,19 +81,6 @@ func AnalyzeExperiment(ctx context.Context, exp *pages_model.PageExperiment) (*A
 		Context: map[string]string{
 			"experiment": string(experimentJSON),
 			"variants":   string(variantsJSON),
-			"instruction": `Analyze these A/B test results. Look at conversion rates,
-impression counts, and event distributions across variants.
-Determine if there is a statistically significant winner.
-Return valid JSON:
-{
-  "status": "winner" or "needs_more_data" or "no_difference",
-  "winner_variant_id": <ID of winning variant, or 0>,
-  "confidence": <0.0 to 1.0>,
-  "summary": "Brief human-readable summary of results",
-  "recommendation": "What to do next"
-}
-Require at least 100 impressions per variant before declaring a winner.
-Use a minimum 95% confidence threshold.`,
 		},
 	})
 	if err != nil {


@@ -96,23 +96,6 @@ func GenerateExperiment(ctx context.Context, repo *repo_model.Repository, config
 			"landing_config":   string(configJSON),
 			"repo_name":        repo.Name,
 			"repo_description": repo.Description,
-			"instruction": `Analyze this landing page config and create an A/B test experiment.
-Return valid JSON with this exact structure:
-{
-  "name": "Short experiment name",
-  "variants": [
-    {
-      "name": "Variant A",
-      "config_override": { ... partial landing config fields to override ... },
-      "weight": 50
-    }
-  ]
-}
-Focus on high-impact changes: headlines, CTAs, value propositions.
-Keep variants meaningfully different but plausible.
-The control variant (original config) is added automatically — do NOT include it.
-Return 1-3 variants. Each variant's config_override should be a partial
-LandingConfig with only the fields that differ from the control.`,
-		},
 	})
 	if err != nil {