# A/B Testing
DaisyChain provides powerful A/B testing capabilities to help you optimize your referral campaigns.
## Overview

### Key Features

- **Campaign Variants**: Test different reward structures and rules
- **Traffic Allocation**: Control traffic distribution between variants
- **Statistical Significance**: Automated significance calculation
- **Performance Metrics**: Track and compare variant performance
## Creating Variants

Create campaign variants to test different reward structures:

```javascript
const variant = await client.createCampaignVariant(parentCampaignId, {
  name: 'Higher Reward Test',
  rewardType: 'fixed_amount',
  rewardAmount: 50.00,
  trafficAllocation: 50
});
```
### Variant Properties

- `name`: Descriptive name for the variant
- `rewardType`: Type of reward (`'fixed_amount'` or `'percentage'`)
- `rewardAmount`: Amount or percentage of the reward
- `trafficAllocation`: Percentage of traffic to direct to this variant (0-100)
## Traffic Management

Control how traffic is distributed between variants:

```javascript
// Update traffic allocation for a variant
await client.updateCampaignVariant(variantId, {
  trafficAllocation: 25 // Reduce to 25% of traffic
});
```
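Since each variant's allocation is a percentage of total traffic, the allocations across all variants (including the control's share) should sum to 100. A small client-side sketch of that check, assuming a hypothetical helper that is not part of the DaisyChain SDK:

```javascript
// Hypothetical helper (not part of the DaisyChain SDK): validate that
// traffic allocations across all variants, plus the control's share,
// sum to exactly 100 before pushing updates.
function validateAllocations(allocations) {
  if (allocations.some((a) => a < 0 || a > 100)) {
    throw new RangeError('Each allocation must be between 0 and 100');
  }
  const total = allocations.reduce((sum, a) => sum + a, 0);
  if (total !== 100) {
    throw new RangeError(`Allocations sum to ${total}, expected 100`);
  }
  return true;
}

// Control keeps 75% after reducing the variant to 25%.
console.log(validateAllocations([75, 25])); // true
```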
## Results Visualization

DaisyChain provides comprehensive visualization tools to help you understand the performance of your A/B tests.

### Dashboard Metrics

The A/B testing dashboard displays key metrics for each variant:
- Conversion Rate: Percentage of referrals that convert to customers
- Average Order Value: Average purchase amount from referred customers
- Revenue Per Referral: Average revenue generated per referral
- Statistical Significance: Confidence level in the results (95% is typically considered significant)
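For reference, the first three metrics are simple ratios over raw counts. A sketch of how they can be derived (the input field names here are illustrative assumptions, not the DaisyChain result schema):

```javascript
// Illustrative only: deriving the dashboard metrics from raw counts.
// Input field names are assumptions, not the DaisyChain schema.
function computeMetrics({ participants, conversions, totalRevenue }) {
  const conversionRate = (conversions / participants) * 100; // percent of referrals that convert
  const averageOrderValue = totalRevenue / conversions;      // revenue per converting customer
  const revenuePerReferral = totalRevenue / participants;    // revenue per referral sent
  return { conversionRate, averageOrderValue, revenuePerReferral };
}

const m = computeMetrics({ participants: 1000, conversions: 150, totalRevenue: 12825 });
// conversionRate 15, averageOrderValue 85.5, revenuePerReferral 12.825
console.log(m);
```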
### Comparative Charts

The dashboard includes several visualization types:
- Bar Charts: Compare key metrics side-by-side between variants
- Time Series: Track performance over time to identify trends
- Funnel Visualization: See where participants drop off in the referral process
- Heat Maps: Identify peak performance times and patterns
### Example: Interpreting Results

```javascript
// Fetch A/B test results
const results = await client.getTestResults(testId);

// Example result structure
{
  variants: [
    {
      id: "var_123",
      name: "Control",
      metrics: {
        participants: 1000,
        conversions: 150,
        conversionRate: 15.0,
        averageOrderValue: 85.50,
        revenuePerReferral: 12.83
      }
    },
    {
      id: "var_456",
      name: "Higher Reward",
      metrics: {
        participants: 1000,
        conversions: 200,
        conversionRate: 20.0,
        averageOrderValue: 82.75,
        revenuePerReferral: 16.55
      },
      improvement: {
        conversionRate: "+33.3%",
        revenuePerReferral: "+29.0%"
      },
      significanceLevel: 97.5 // 97.5% confidence
    }
  ]
}
```
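You can sanity-check a reported significance level with a standard two-proportion z-test on the raw counts. This is a textbook formula sketch, not necessarily the exact method DaisyChain uses for its `significanceLevel` field:

```javascript
// Two-proportion z-test on conversion counts (standard formula sketch;
// not guaranteed to match DaisyChain's internal significance method).
function zScore(convA, nA, convB, nB) {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se;
}

// Control: 150/1000 conversions vs Higher Reward: 200/1000
const z = zScore(150, 1000, 200, 1000);
// |z| > 1.96 corresponds to >95% confidence (two-sided)
console.log(z.toFixed(2), z > 1.96);
```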
## Exporting Data

You can export A/B test results for further analysis:

```javascript
// Export test results to CSV
const csvUrl = await client.exportTestResults(testId, {
  format: 'csv',
  dateRange: {
    start: '2023-01-01',
    end: '2023-01-31'
  }
});
```
## Test Status

Manage the lifecycle of your A/B tests:

```javascript
// Start the test
await client.updateTestStatus(variantId, 'running');

// Pause the test
await client.updateTestStatus(variantId, 'stopped');

// The test automatically completes when statistical significance is reached
```
### Status Types

- `running`: Test is active and accepting traffic
- `stopped`: Test is paused
- `completed`: Test has reached statistical significance
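The lifecycle above implies a small state machine. A sketch of the transitions, under the assumption that `completed` is terminal and a stopped test can be resumed (the exact rules the API enforces are not stated here):

```javascript
// Assumed lifecycle rules: 'completed' is terminal and is set server-side
// when significance is reached; a 'stopped' test can be resumed.
const TRANSITIONS = {
  running: ['stopped', 'completed'],
  stopped: ['running'],
  completed: [] // terminal
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}

console.log(canTransition('running', 'stopped'));   // true
console.log(canTransition('completed', 'running')); // false
```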
## Performance Tracking

Monitor variant performance through various metrics:

```javascript
const metrics = await client.getVariantMetrics(variantId, {
  startDate: '2024-01-01',
  endDate: '2024-03-31'
});
```
### Available Metrics
- Conversion rate
- Total referrals
- Total revenue
- Average order value
- Statistical significance
## Best Practices

- **Test Design**
  - Test one variable at a time
  - Run tests for sufficient duration
  - Use meaningful sample sizes

- **Traffic Allocation**
  - Start with equal distribution
  - Adjust based on early results
  - Consider seasonal variations

- **Analysis**
  - Wait for statistical significance
  - Consider secondary metrics
  - Document learnings

- **Implementation**
  - Start with simple tests
  - Gradually increase complexity
  - Monitor for technical issues
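"Use meaningful sample sizes" can be made concrete with the standard two-proportion sample-size formula. This is a textbook estimate, not a DaisyChain API; the z-values below assume a two-sided test at alpha = 0.05 with 80% power:

```javascript
// Standard sample-size estimate per variant for detecting a lift from
// conversion rate p1 to p2. Textbook formula, not part of the SDK.
// Defaults assume alpha = 0.05 (two-sided) and 80% power.
function sampleSizePerVariant(p1, p2, zAlpha = 1.96, zBeta = 0.84) {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  const effect = p2 - p1;
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (effect * effect));
}

// Detecting a lift from a 15% to a 20% conversion rate:
console.log(sampleSizePerVariant(0.15, 0.20)); // 902 referrals per variant
```

Smaller expected lifts require dramatically more traffic, which is why under-powered tests often end up in the "Inconclusive Results" bucket below.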
## Advanced Testing

### Multivariate Testing

Test multiple variables simultaneously:

```javascript
const multiVariantTest = await client.createMultiVariantTest({
  name: "Comprehensive Test",
  variables: [
    {
      name: "reward_amount",
      values: [10, 25, 50]
    },
    {
      name: "reward_type",
      values: ["fixed_amount", "percentage"]
    }
  ]
});
```
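A full-factorial multivariate test expands into one cell per combination of variable values, so traffic is split across every combination. A sketch of that expansion (illustrative only; presumably the service generates these cells for you):

```javascript
// Expand variables into their full-factorial combinations (Cartesian
// product). Each resulting object is one test cell to receive traffic.
function cartesian(variables) {
  return variables.reduce(
    (combos, v) =>
      combos.flatMap((c) => v.values.map((val) => ({ ...c, [v.name]: val }))),
    [{}]
  );
}

const cells = cartesian([
  { name: 'reward_amount', values: [10, 25, 50] },
  { name: 'reward_type', values: ['fixed_amount', 'percentage'] }
]);
console.log(cells.length); // 6 combinations (3 × 2)
```

Note how quickly the cell count grows: three variables with three values each already yield 27 cells, each needing the per-variant sample size discussed under Best Practices.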
### Targeting Specific Segments

Target tests to specific user segments:

```javascript
const segmentedTest = await client.createCampaignVariant(campaignId, {
  name: "New Customer Variant",
  rewardAmount: 30,
  targeting: {
    userType: "new_customer",
    countries: ["US", "CA"],
    minPurchaseCount: 0,
    maxPurchaseCount: 0
  }
});
```
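To illustrate the semantics of the `targeting` block above, here is a hypothetical client-side sketch of how a user could be matched against it. The real filtering happens inside DaisyChain, and the user field names here are assumptions:

```javascript
// Hypothetical sketch of targeting evaluation (the real filtering is
// server-side; user field names are assumptions for illustration).
function matchesTargeting(user, targeting) {
  if (targeting.userType && user.userType !== targeting.userType) return false;
  if (targeting.countries && !targeting.countries.includes(user.country)) return false;
  if (targeting.minPurchaseCount != null && user.purchaseCount < targeting.minPurchaseCount) return false;
  if (targeting.maxPurchaseCount != null && user.purchaseCount > targeting.maxPurchaseCount) return false;
  return true;
}

const targeting = {
  userType: 'new_customer',
  countries: ['US', 'CA'],
  minPurchaseCount: 0,
  maxPurchaseCount: 0
};

console.log(matchesTargeting({ userType: 'new_customer', country: 'US', purchaseCount: 0 }, targeting)); // true
console.log(matchesTargeting({ userType: 'new_customer', country: 'US', purchaseCount: 3 }, targeting)); // false
```

Setting both `minPurchaseCount` and `maxPurchaseCount` to 0 restricts the variant to users with no prior purchases, which is what makes it a "new customer" segment.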
## Example Implementation

Here's a complete example of implementing an A/B test:

```javascript
// 1. Create a variant
const variant = await client.createCampaignVariant(campaignId, {
  name: 'Higher Reward Test',
  rewardType: 'fixed_amount',
  rewardAmount: 50.00,
  trafficAllocation: 50
});

// 2. Monitor performance for the variant and the parent (control) campaign
const metrics = await client.getVariantMetrics(variant.id);
const parentMetrics = await client.getVariantMetrics(campaignId);

// 3. Update traffic allocation if needed
if (metrics.conversionRate > parentMetrics.conversionRate) {
  await client.updateTrafficAllocation(variant.id, {
    trafficAllocation: 75
  });
}

// 4. Handle test completion
client.on('test.completed', async (event) => {
  const { variantId, isWinner } = event;
  if (isWinner) {
    // Apply the winning variant's settings to the parent campaign
    await client.applyWinningVariant(variantId);
  }
});
```
## Troubleshooting

Common issues and solutions:

- **Insufficient Traffic**
  - Increase test duration
  - Adjust traffic allocation
  - Combine similar variants

- **Inconclusive Results**
  - Check sample size
  - Verify tracking setup
  - Review test parameters

- **Technical Issues**
  - Monitor error rates
  - Check traffic distribution
  - Verify conversion tracking
## Next Steps
- Analytics & Campaigns - Learn about campaign analytics
- Program Rules - Configure campaign rules
- API Reference - Explore available endpoints