Apollo.io is a sales intelligence platform used by over 1 million sales professionals across 160,000 companies. Sales teams use Apollo to generate leads, manage email sequences, and track outbound sales analytics.
Apollo’s sales team obsessively uses its own product. In the early days, this gave the product team quick access to product feedback and let it iterate continuously. This tight feedback loop was crucial to building a product its users loved, helping Apollo scale from 0 to 1 million users in under 3 years.
As the company grew to serve more customer segments, from startup founders to large enterprise sales teams, Apollo.io’s product team struggled to recreate that tight feedback loop for every segment.
1. No quick, easy way to get accurate user feedback at scale. User interviews demand too much effort to test assumptions frequently.
2. Traditional email surveys generated unreliable data: the most relevant users did not respond, and their recollections of feature usage were vague. Moreover, frequent email surveys risked creating a negative brand perception, and a low response rate of ~3% meant emailing thousands of users for a single research study.
3. The real, active users were hard to reach. Customer success teams interacted more with decision influencers than with actual users, and gatekeepers (e.g., sales leadership) preferred that all communication from Apollo.io to their teams go through them, creating high friction.
4. Small and mid-sized accounts did not have customer success managers, so requests and issues from these teams were not actively solicited.
The VP of Product, Krishan, believed that as the user base scaled beyond a threshold, feedback loops would break down and product development would drift away from solving customer problems. To preemptively fix this, Apollo.io adopted Blitzllama, an in-product user research tool, with three goals:
1. Testing assumptions faster and aligning new features and product ideas more closely with customer problems. In-product prompts launched right after a user tries a feature make it much easier to evaluate how users feel about it.
2. Accessing customer inputs for critical product decisions like sprint planning.
3. Setting up feedback hooks inside the product to continuously collect feedback after critical actions, such as uninstalling the Chrome Extension (example below), or to run an NPS survey every 30 days.
Product and growth outcomes
The product team, soon joined by the growth team, saw tangible impact across the business. Among the many outcomes achieved, the highlights included:
1. Increasing onboarding conversion by discovering previously unknown product gaps, then validating improvements after implementation.
2. Rapidly identifying and validating new feature ideas that incorporated recent developments in AI, then prioritizing those features.
3. Improving the experience with the Chrome Extension by identifying pain points and improvement opportunities.
In the first 4 months of using the platform, the teams generated over 50 actionable, highly contextual insights.