**What's important to know:** Before scaling, focus on behavioral metrics that validate real user value—activation, word-of-mouth adoption, and qualitative feedback—rather than vanity metrics like downloads. Early-stage apps need leading indicators (time to first value, engagement patterns) that predict retention and willingness to pay, not lagging indicators that only emerge after months of data collection.
It's such an exciting moment when you launch your app. You finally put it out into the world… now what?
It's tempting to focus on what feels productive: building new features, testing prices, running ads. But if you don't know whether you're actually solving a real problem, most of that is just optimisation in the dark.
The metrics that feel good in that early phase — downloads, signups, even revenue — don't necessarily tell you if you're on the right track. They tell you people are showing up and trying the app, not whether they're getting real value from it.
I see this constantly in growth audits. One client had over 90% onboarding completion, which sounds fantastic. But most users were gone by day two. Onboarding wasn't the issue; value was.
In this first, exciting phase of launching your new app or finding product-market fit, it's critical to focus on metrics that reflect real value creation, not the shiny, ego-boosting ones.
We'll talk about how to make that mindset shift, and from there we'll cover some traps and risks along the way, such as why retention alone doesn't equal product-market fit.
And despite focusing on metrics here, please (she begs politely) don't ignore qualitative signals. At this stage, qualitative feedback will guide you just as much as quantitative data, if not more. You don't have the luxury of large data sets yet, and that's okay.
From there, I'll help you define what finding product-market fit actually looks like for your app, so you leave with a clear list of metrics to focus on. This is the super-quick crash course version. If you want more depth, I highly recommend checking out my free course, How to make an app people will pay for, where I walk through this in detail and give you a full Product Strategy Canvas to apply to your own app.
Focusing on the right metrics helps you learn faster and move forward with confidence. At this early stage, that clarity makes everything feel a lot less overwhelming.
What really matters pre-product-market fit
Product-market fit means building an app that solves a specific problem for a specific audience — with a solution that genuinely fits their needs.
It's the point where you're no longer persuading people to use your product — they want it. They love it, get consistent value from it, and choose to keep using it.
What finding product-market fit is about
The reality is that pre-product-market fit isn't about growth.
Yes, you need some growth, enough to generate data, learn, and validate what's working. But the core questions you're trying to answer are:
- Does my solution actually solve the problem?
- Who values it the most?
Fixating on growth metrics too soon is what Eric Seufert calls The Growth Trap: you might achieve short-term spikes, but not long-term, sustainable growth.
Ironically, an over-focus on growth won't help you grow. Product-market fit will.
Instead, shift your mindset. At this stage, your job is to learn and to look for strong signals that product-market fit is emerging.
And here's the good news: there are so many metrics you could track that I give you full permission to ignore most of them.
What metrics to ignore
It's buzzword bingo for the early-stage founder: countless metrics and fancy phrases thrown at you, from Cost of Acquisition to Lifetime Value. The problem? Not all of them actually matter at this stage.
So here are some of the most common metrics I give you 100% permission to ignore, or at least not obsess over:
- Total downloads: Just because people download your app doesn't mean they even open it or get value from it.
- Total signups: Same idea.
- Social media followers: Great for the ego, but meaningless if it isn't actually driving awareness or value.
- App store ranking: Might help growth a little, but it tells you nothing about whether you're solving a real problem.
- Day 1 download spikes: Don't over-interpret early spikes; first users often behave differently than your eventual audience.
And that's the key question at this stage: is a specific group of users getting repeated value? If a metric doesn't help answer that, toss it out.
Now, there are metrics that reflect value, but at launch, you often can't measure them yet.
What metrics should you measure instead?
With this mindset shift, the metrics that matter pre-product-market fit are behavioral. It's not about whether users showed up; it's about whether they did the things that indicate they're actually getting value.
Together, these behaviors give you a clear picture of where users are dropping off and what you need to focus on.
The key concept here is activation. If you're activating users, they're more likely to retain and pay. Activation has a domino effect in early-stage startups. People won't pay for an app they haven't truly experienced; they haven't reached the moment where the value clicks.
You're looking for the behaviors that, over time, predict whether someone will stay and pay. Look for patterns across at least 2–3 cohorts before drawing conclusions. Even with small numbers, consistency matters more than volume.
Time to first value and time to core value
Two useful activation metrics, building on the product-led growth thinking popularised by Wes Bush:
- Time to first value: how long it takes a user to experience something valuable for the first time
- Time to core value: how long it takes them to reach the core, repeatable value of your app

In a meditation app, for example, time to first value might be completing a first meditation session, while time to core value could be meditating at least four times in a week and starting to build a routine.
The goal isn't to over-index on speed; some actions naturally take time. But in the early days, reaching that value moment usually takes too long, so you need to help users get there faster. It's more about relative timing than absolute speed.
For example, in a food-scanning app, users who scanned at least 7 foods in a week were much more likely to stay than those who took 2-3 weeks to reach the same milestone.
The scanning feature was the main way users could check whether a food was safe to eat.
So the team could focus on how to help more users complete that action within a defined period, rather than just letting it happen organically, e.g., more in-app nudges to scan, examples of what you can scan with it, etc.
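As an illustration, here's a minimal Python sketch of how you might compute time to core value from raw analytics events. The event log shape — `(user_id, event_name, timestamp)` tuples — and the event names are assumptions; map them to whatever your analytics tool exports. The milestone mirrors the food-scanning example: seven core actions inside a rolling seven-day window.

```python
from datetime import datetime, timedelta

# Hypothetical event log: (user_id, event_name, timestamp) tuples.
# In practice, this would come from your analytics tool's export.
events = [
    ("u1", "signup", datetime(2024, 5, 1)),
    ("u1", "scan_food", datetime(2024, 5, 1)),
    ("u1", "scan_food", datetime(2024, 5, 2)),
    ("u2", "signup", datetime(2024, 5, 1)),
    ("u2", "scan_food", datetime(2024, 5, 20)),
]

def days_to_core_value(events, user_id, core_event="scan_food",
                       threshold=7, window_days=7):
    """Days from signup until the user logs `threshold` core events
    within any rolling `window_days` window, or None if never reached."""
    user_events = sorted(t for u, e, t in events
                         if u == user_id and e == core_event)
    signup = min(t for u, e, t in events
                 if u == user_id and e == "signup")
    # Slide a window of `threshold` consecutive core events and check
    # whether they all fall within `window_days` of each other.
    for i in range(len(user_events) - threshold + 1):
        window = user_events[i : i + threshold]
        if window[-1] - window[0] <= timedelta(days=window_days):
            return (window[-1] - signup).days
    return None
```

Tracking how this number shifts across cohorts tells you whether your nudges and onboarding changes are actually getting users to core value faster.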
Active users
Alongside activation, having a measure of active users, defined by meaningful behaviors rather than just app opens, is extremely important.
Tracking whether you're activating a higher percentage of users and doing it faster gives you a clear signal that you're on the right track.
And, as always, this should be tied to the key behaviors you've identified as indicators of value.
Percentage of customers through word of mouth
Referrals are huge. If 15% or more of new users come through referrals, that's a strong signal of product-market fit.
Word of mouth takes time to build in a new app, but if you start seeing more people talking about your product and your percentage of users acquired through word of mouth grows, that's another clear positive signal.
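The calculation itself is simple. A quick sketch, assuming you can tag each new user with an acquisition source (the data shape here is hypothetical; adapt it to your attribution tool):

```python
def referral_share(new_users):
    """Fraction of new users whose acquisition source is a referral.
    `new_users` is an assumed list of (user_id, source) pairs."""
    if not new_users:
        return 0.0
    referred = sum(1 for _, source in new_users if source == "referral")
    return referred / len(new_users)

# Made-up cohort: 2 of 4 new users came via referral.
cohort = [("u1", "referral"), ("u2", "paid"),
          ("u3", "organic"), ("u4", "referral")]
print(referral_share(cohort))  # 0.5 — well above the ~15% signal threshold
```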
What you can't measure yet at launch
We've covered what you should ignore and what you should focus on.
Not to add confusion, but there are also valuable metrics that are hard to measure at launch:
- Retention curves need weeks of data to take shape
- Realized lifetime value per customer can take months to pin down
- The same goes for sustainable growth
Instead, focus on leading indicators — the behaviors that signal value early — rather than lagging indicators, which reflect outcomes further down the line.
Leading indicators: activation, early retention, qualitative feedback
Lagging indicators: lifetime value, revenue, long-term retention
Leading indicators act as your early warning system. They show whether something might be off track before the final results are in, letting you be proactive rather than reactive.
Retention
One of the most common metrics people tell you to track in this early phase is retention.
While it takes time to really understand your retention curves, you can start by looking at 7- and 30-day retention.
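A minimal sketch of classic day-N retention, assuming you can export signup dates and per-day activity from your analytics tool (the data shapes here are hypothetical):

```python
from datetime import date, timedelta

def day_n_retention(signups, activity, n):
    """Share of a signup cohort seen active exactly N days after signup.
    `signups`: {user_id: signup_date}; `activity`: set of (user_id, date).
    Assumed shapes — adapt to your analytics export."""
    cohort = list(signups)
    retained = sum(
        1 for user in cohort
        if (user, signups[user] + timedelta(days=n)) in activity
    )
    return retained / len(cohort) if cohort else 0.0
```

Run this with `n=7` and `n=30` per weekly cohort; the trend across cohorts matters more than any single number.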
But here's the trap: over-indexing on retention too early can be dangerous.
Apologies for the upcoming rant, but honestly, it feels much-needed. We can't talk about pre-PMF metrics without digging into the risks of equating retention with PMF.
The retention trap
Often, it feels like product-market fit is the same as retaining customers. But you can keep people around without actually solving their problem.
Strange, but I've seen it happen. Good retention can make an app feel like it has product-market fit before it really does.
There are four ways this can happen.
1. Gamification over value
The first trap is relying on mechanics — streaks, reminders, badges — to drive retention without delivering real value. These are classic forms of gamification.
I've experienced this myself. I got completely hooked on a game, loving the streaks that kept me coming back. But eventually, I realized I wasn't really enjoying myself anymore. I was returning because of the mechanics, not the value. And yet, I couldn't bring myself to delete the app until I quit cold turkey.
2. Retention is driven by a small group of power users
This isn't inherently bad, but it's something to watch closely.
It's all about balance: are you too specific, or is your core group too small to scale? If growth only happens within this small group, your product-market fit might not be ready to scale.
You need to be specific enough to stand out, but not so niche that you're building for only a handful of users who aren't representative of a larger market.
3. Pricing that masks weak product-market fit
This happens when heavy discounts or extended trials are offered. Sure, it attracts a lot of users, but often bargain hunters. You might see strong retention numbers from users who signed up for an annual subscription at a very low price, but they're not actively using your app.
4. Annual subscriptions can delay churn rather than prevent it
Annual subscriptions are great for cash flow and have plenty of benefits. But if most users aren't actively engaging with your app, locking them in doesn't mean they value it.
The lesson: commitment isn't the same as conviction. Someone locked into an annual plan is not the same as someone who would be devastated to lose your app.
To avoid this trap:
- Pair retention data with engagement data: if users are renewing but not actually using the app, dig deeper before celebrating
- Rely on qualitative signals to understand your users' experience and motivations
Qualitative signals to focus on
In the early days, you don't have massive numbers, and that's completely normal. With small sample sizes, it's hard to distinguish noise from signal, and the variance in metrics can make even the steadiest founders nervous.
This is why qualitative signals are extremely valuable. Remember: product-market fit is qualitative first, quantitative second. You'll feel it before you can measure it: users reaching out unprompted with feedback, telling you how much they love it, asking when features are coming, referring friends without being asked. Little moments like these, when growth also starts to feel easier, tell you you're on the right track long before you hit statistical significance.
The Sean Ellis test
Sean Ellis studied hundreds of startups to find what separated the ones that went on to succeed. He discovered that successful startups typically had at least 40% of users who would be very disappointed if the product no longer existed.
He created a simple PMF test to measure this:
Ask users: 'How would you feel if you could no longer use [app name]?', with the answer options 'very disappointed', 'somewhat disappointed', and 'not disappointed'. Then follow up with a second, open-ended question asking why they chose that answer.
That second question is critical: it helps you understand what actually drives product-market fit, even if your sample size is too small for statistical significance.
A few practical tips:
- You need enough responses to get meaningful insight: at least 100 for a general sense, and 500–1,000 if you want to segment by signup reason, main feature used, or other factors
- Survey the right users at the right time: ask those who should have reached their aha! moment, not brand-new signups who haven't had a chance to experience the value yet
This test gives you a qualitative measure of product-market fit, helping you identify both the level of engagement and the reasons behind it.
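Scoring the survey is straightforward. Here's a short Python sketch with made-up responses that computes the share of 'very disappointed' answers overall and per segment (segments such as signup reason or main feature used, as suggested above):

```python
from collections import Counter

# Hypothetical survey responses: (answer, segment) pairs.
responses = [
    ("very disappointed", "meditation"),
    ("very disappointed", "meditation"),
    ("somewhat disappointed", "sleep"),
    ("not disappointed", "sleep"),
]

def pmf_score(responses):
    """Share of respondents answering 'very disappointed'
    (Sean Ellis benchmark: 40%+ suggests product-market fit)."""
    counts = Counter(answer for answer, _ in responses)
    total = sum(counts.values())
    return counts["very disappointed"] / total if total else 0.0

def pmf_by_segment(responses):
    """PMF score broken down by segment, to find who values you most."""
    segments = {}
    for answer, segment in responses:
        segments.setdefault(segment, []).append((answer, segment))
    return {seg: pmf_score(rs) for seg, rs in segments.items()}
```

The per-segment breakdown is where the 'who values it the most?' question gets answered: one segment scoring well above 40% while others lag is a signal to narrow your focus.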
Net Promoter Score (NPS)
The Net Promoter Score correlates strongly with the Sean Ellis test. For example, Ladder found that users who said they'd be 'somewhat disappointed' had much lower NPS scores than those who said 'very disappointed'.
It can be valuable to ask, alongside your PMF question, why users would or wouldn't promote your app. This gives insight not only into how much people value your product, but also what drives advocacy and highlights areas you can improve to turn more users into promoters.
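If you collect the standard 0-10 'how likely are you to recommend us?' answers, the NPS calculation is a one-liner (a minimal sketch; the scores here are made up):

```python
def nps(scores):
    """Net Promoter Score from 0-10 'would you recommend' answers:
    % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors -> 0.0
```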
User interviews
If you don't have enough users for the Sean Ellis test or for measuring NPS, the best recommendation I can give you is to conduct user interviews, to hear firsthand from customers what they love, and don't love, about your product.
Starting with even just 5-10 Jobs-to-be-Done (JTBD) interviews will teach you an enormous amount about why users 'hire' your app, what alternatives they considered, and where your product falls short.
I recommend prioritizing users with whom you suspect you might have product-market fit as well as recently churned users with a similar need. That way you can understand what the difference is and what to prioritize.
The role of ASO metrics: when discoverability becomes relevant
While this guide focuses on pre-PMF metrics, it's worth noting that once you've validated product-market fit, App Store Optimization (ASO) metrics become critical to sustainable growth. ASO works differently across platforms: on the App Store, your title, subtitle, and keyword field are the highest-leverage ranking factors, while on Google Play, the short description carries disproportionate weight compared to other fields.
Research shows that metadata changes on the App Store produce measurable ranking shifts within 1-3 days, not the 14-day wait previously assumed. This faster feedback loop means that once your product is solid, you can rapidly test different keyword positioning and messaging to improve organic discoverability.
However, at the pre-PMF stage, optimizing ASO without first validating that users actually want what you've built is premature. Focus first on proving value; worry about ranking higher in search results once you know people will stick around and pay.
Similarly, while app store category positioning and visual assets (icons, screenshots) influence both discoverability and conversion rates, these optimizations serve users who have already found you. If those users are leaving immediately because the core product isn't right, no amount of ASO work will help. The metrics covered earlier in this guide — activation, engagement, word of mouth — are what determine whether ASO efforts will actually move the needle.
Sources
- RevenueCat Blog: Stop measuring downloads: what to track before product-market fit (base article)
- Asodesk Blog: Machine Learning Model Analysis of App Store Optimization Iterations
- AppFollow Blog: ASO Ranking Factors: The Complete Guide for 2026
- AppFollow Blog: ASO vs SEO: Key Differences Between