Why You Shouldn’t Let Perfect Be the Enemy of Experimentation — Dan Pannasch, RevenueCat
Sub Club by RevenueCat • 12/07/22 • 63 min
On the podcast I talk with Dan about how to design experiments that answer the right questions, common A/B testing pitfalls to avoid, and how a simple checklist might just save your complex experiment.
Top Takeaways
🍞 Conclusions from tests sometimes go stale faster than you realize
👌 Minimizing the cost of running tests will improve decision making
🤪 Check your sanity — or don’t live and die by statistical significance
About Dan Pannasch
👨‍💻 Senior Product Manager at RevenueCat
💪 Dan saw what experimentation looked like across a portfolio of app businesses when his previous company TelTech’s success led to an acquisition by IAC. He joined RevenueCat in May 2022 and leads the Experiments project.
💡 “You could change the color [of the buy button in A/B testing and] release it in the new application. And if you can't tell which one won [with users], then you learned that it doesn't matter. You didn't learn which one won, but you did learn that it doesn't matter for you right now.”
👋 Twitter | LinkedIn
Links & Resources
‣ Join the RevenueCat team
‣ Sub Club interview with Blinkist’s Jaycee Day
‣ RevenueCat’s Experiments tool
Follow us on Twitter
‣ David Barnard
‣ Jacob Eiting
‣ RevenueCat
‣ Sub Club
Episode Highlights
[2:18] Experimentation: What is app experimentation and why should you do it? Sound decision-making, awareness of how changes affect key variables, and risk mitigation are everything when it comes to user experience.
[9:04] Taking a page from Duolingo’s playbook: Product strategy and intuition naturally limit possibilities — and that’s not the place for A/B testing. Microdecisions within deliverables are testable, and then it’s just cost-benefit analysis.
[14:04] The early days: Cost-benefit analysis should pervade every stage of the process, from early growth onward. Designing the perfect A/B test isn’t always possible when customers are begging you to ship.
[19:20] Paywall plays: Where you put the paywall is a tough decision. But there are strategies for implementation and risk mitigation.
[24:35] Testing 101: Be sure to write down the hypothesis before testing so that you can measure impact. Unexpected results — where you learn the most about variables — depend on it.
[28:05] Follow it up: Dan shares his thoughts on user follow-ups that enrich quantitative data with qualitative insight. Sometimes talking to users can be very powerful.
[31:13] Sanity check: How to do a testing plan, as done by Dan during his time as a PM at TelTech. Plus, an explanation of statistical significance.
[39:53] Impact and intuition: To understand user experience impact and product intuition, it’s critical to ensure the design aligns with the value proposition.
[42:22] Actual testing: There are pitfalls and screw-ups to watch out for when testing (and even before).
[46:33] Analyzing the results: Dan provides his overview for analyzing the results after running the experiment. Second and third order effects are important but not always immediately obvious.
[48:41] The Experiments product: RevenueCat’s new tool enables easy A/B testing for two offerings. The data helps you analyze the full subscription lifecycle to understand which variant is producing more value for your business.
[55:55] Bugs: No product will ever be perfect, but Experiments offers app developers the tools and confidence to make sure it’s at least most of the way there.
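For the statistical-significance discussion at [31:13], here is a minimal sketch of one standard way to check whether two conversion rates differ significantly: a two-proportion z-test. This is not Dan’s or RevenueCat’s method from the episode; the function name and the sample numbers are purely illustrative.

```python
# Hedged sketch: two-proportion z-test for an A/B test on conversion rates.
# All names and numbers here are illustrative, not from the episode.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example: 120/2000 vs. 150/2000 trial-start conversions
z, p = two_proportion_z_test(120, 2000, 150, 2000)
print(round(z, 2), round(p, 3))  # → 1.89 0.059
```

Note the example lands just above the conventional 0.05 threshold — a concrete illustration of the episode’s point about not living and dying by statistical significance alone.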