Marketing experimentation for startups

Reuben O’Connell

Tools: Google Analytics, Hotjar, Mixpanel
Here's an outline of the basic way I start building and running experimentation for marketing acquisition at startups. It doesn't always follow this order, and it's worth being aware that different businesses have different resources (i.e. data) available to feed this kind of work.
Identifying existing funnel touch points: I start by mapping out the existing customer journey for a purchaser or lead coming through marketing channels, recognising what's in place at key stages like awareness, consideration, and decision. This helps me target each stage effectively with tailored experiments.
Developing hypotheses based on insights: I formulate hypotheses for each stage of the funnel using a mix of marketing principles, analytics, and user feedback. I then take these hypotheses and look at experiments I've run before, or ideate something completely new that will hit that funnel stage. For example, testing the impact of testimonials on ad engagement helps us understand their value in speeding up movement through the consideration stage.
When I ideate experiments in this phase, I draw up the 'IDE' and the 'MVE': what's our ideal experiment, and what's the minimal version of it, essentially an MVP, that can still give us insight? I do this because it helps us get insights quicker. If we run an MVE first, we'll decide whether to run the full IDE later based on its performance.
Defining experiment-level KPIs: I establish clear KPIs for each experiment, such as click-through rates for ad campaigns and conversion rates for interactive demos, ensuring they align with our strategic goals and correlate with movement in the master KPI, usually sales, AOV, or number of qualified leads.
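As a concrete sketch of what these experiment-level KPIs look like in code; the counts come from an analytics export in practice, and all numbers here are illustrative:

```python
# Minimal sketch of experiment-level KPIs; in practice the raw counts come
# from an analytics export (all numbers here are illustrative).
def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR: share of ad impressions that resulted in a click."""
    return clicks / impressions if impressions else 0.0

def conversion_rate(conversions: int, visitors: int) -> float:
    """CVR: share of visitors who completed the target action."""
    return conversions / visitors if visitors else 0.0

# Example: a testimonial-ad variant with 42 clicks on 1,800 impressions
# and 9 demo sign-ups from 420 landing-page visitors.
print(f"CTR: {click_through_rate(42, 1800):.2%}")  # 2.33%
print(f"CVR: {conversion_rate(9, 420):.2%}")       # 2.14%
```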
ICE prioritisation: I prioritise experiments using the ICE method, focusing on those with the highest potential impact, confidence in success, and ease of implementation. As I have some technical skills, the ease of implementation will usually be high for anything that isn't focused on changing the purchase flow.
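To make the ranking concrete, here's a minimal sketch of ICE scoring. Teams vary on whether they multiply or average the three 1-10 scores; this sketch assumes the product, and the backlog items and scores are illustrative, not from a real engagement:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    impact: int      # 1-10: expected effect on the master KPI
    confidence: int  # 1-10: how sure we are the hypothesis holds
    ease: int        # 1-10: how cheap and fast it is to ship

    @property
    def ice(self) -> int:
        # One common convention: multiply the three scores.
        return self.impact * self.confidence * self.ease

backlog = [
    Experiment("Testimonials in paid social ads", impact=7, confidence=6, ease=9),
    Experiment("Interactive demo on pricing page", impact=8, confidence=5, ease=4),
    Experiment("Rework checkout flow", impact=9, confidence=4, ease=2),
]

# Highest ICE score runs first.
for exp in sorted(backlog, key=lambda e: e.ice, reverse=True):
    print(f"{exp.ice:>4}  {exp.name}")
```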
Launching experiments in phases: I start with experiments that promise the most insight and adjust based on real-time data. This phased approach allows for agile marketing and continuous improvement. Once an experiment yields a good insight, it isn't closed; see 'Analysing and iterating' below.
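A quick way to gauge whether a phased experiment can deliver insight in a reasonable window is a rough sample-size estimate before launch. Here's a minimal sketch using the standard two-proportion normal approximation; the baseline rate and target lift are illustrative:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(base_rate: float, lift: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Rough visitors needed per variant to detect a relative lift in
    conversion rate (two-sided test, standard normal approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# e.g. a 3% baseline CVR and a hoped-for 20% relative lift:
print(sample_size_per_arm(0.03, 0.20))  # ≈ 13,911 visitors per arm
```

If the required traffic is far beyond what a channel can supply, that's a signal to run the MVE, pick a coarser KPI, or test for a bigger lift.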
Monitoring and measuring results: I continuously track the results of each experiment against established KPIs using robust analytics tools, making data-driven decisions to guide future efforts.
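For conversion-style KPIs, the measurement step usually boils down to a significance check on control versus variant. A minimal sketch using a pooled two-proportion z-test, which is one reasonable choice among several; the counts are made up:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates between
    control (a) and variant (b), via the pooled z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative numbers: control 120/4000 vs testimonial variant 158/4000.
p = two_proportion_z_test(120, 4000, 158, 4000)
print(f"p-value: {p:.3f}")  # ≈ 0.020 here; below 0.05 suggests a real difference
```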
Analysing and iterating: I delve into the data collected to analyse both direct results and underlying behaviours. This iterative process informs ongoing adjustments to our marketing strategies. Structured experimentation is nothing new, but it is rarely done well.
If we have a positive insight, we look at what behaviours and/or emotions drove it. For example, if the testimonial ad experiment is successful, we can reasonably consider drivers such as a lack of trust, a lack of understanding around how users interact with the product, or poor feature-to-benefit/outcome content already in play.
This then allows us to draw up more experiments, built on a proven hypothesis, that tackle these objections or blockers on different channels or in different formats.
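A simple way to surface those drivers is to break a winning experiment down by segment. In this illustrative sketch, the testimonial uplift concentrating in new visitors would point towards a trust gap; all numbers are invented for the example:

```python
# Illustrative drill-down: where did the testimonial uplift concentrate?
# Values are (visitors, conversions) per arm; all numbers are made up.
results = {
    "returning visitors": {"control": (2000, 70), "variant": (2000, 74)},
    "new visitors":       {"control": (2000, 50), "variant": (2000, 84)},
}

for segment, arms in results.items():
    rates = {arm: conv / n for arm, (n, conv) in arms.items()}
    lift = rates["variant"] / rates["control"] - 1
    print(f"{segment}: control {rates['control']:.1%}, "
          f"variant {rates['variant']:.1%} ({lift:+.0%} lift)")
```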
Scaling successful experiments: This happens alongside the previous step. Once an experiment proves successful, I scale it up, extending it to more channels and audiences to maximise its impact. As mentioned, if we ran an 'MVE', this is where the 'IDE' comes in.
Documenting and sharing learnings: I compile all findings and insights from these experiments into a comprehensive playbook. This documentation not only guides our future strategies but also serves as a valuable resource for other teams looking to implement a similar approach, or enhance their output based on real user and visitor insights.
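The playbook format itself can be lightweight. Here's a minimal sketch of a structured entry; the schema and field names are illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class PlaybookEntry:
    """One documented experiment; the fields are an illustrative schema."""
    name: str
    funnel_stage: str          # awareness / consideration / decision
    hypothesis: str
    kpi: str
    result: str                # win / loss / inconclusive
    inferred_drivers: list[str] = field(default_factory=list)
    follow_ups: list[str] = field(default_factory=list)

entry = PlaybookEntry(
    name="Testimonials in paid social ads",
    funnel_stage="consideration",
    hypothesis="Social proof lifts ad engagement and downstream conversion",
    kpi="CTR, then conversion rate",
    result="win",
    inferred_drivers=["trust gap among new visitors"],
    follow_ups=["testimonials on landing page", "case-study email in nurture"],
)

print(json.dumps(asdict(entry), indent=2))  # easy to store, search, and share
```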