The industry is asking better questions
Practitioners are raising more precise questions about what actually moves rankings. The shift from "does this work?" to "under what conditions does this work?" signals a broader maturation. We are seeing fewer blanket statements and more context-aware analysis, a necessary evolution as both Apple and Google layer in more sophisticated signals.
The core question remains: which actions correlate with ranking improvement, and which are statistical noise or outright counterproductive? As more teams track iterations systematically, patterns are emerging that do not always match conventional wisdom.
Screenshots are now indexed metadata on iOS
One of the most significant changes in the past year: Apple now appears to extract and index visible text from screenshot captions. Apps have begun ranking for keywords that appear nowhere in the title, subtitle, or hidden keyword field; the terms appear only in the captions overlaid on screenshot images.
This was first observed in mid-2025 and has since been confirmed through controlled experiments. An app targeting "track sleep" started ranking for that phrase after adding it to a screenshot caption, with no other metadata changes. The same pattern has repeated across categories and markets.
What this means for practice:
- Screenshot captions are no longer purely conversion assets; they are now keyword metadata delivered visually
- The first three screenshots carry the most weight, as they appear in search result previews before users tap in
- Captions should be optimized for readability (large, high-contrast text) to ensure reliable extraction
- Each screenshot should target one clear keyword theme rather than attempting to stuff multiple terms into a single image
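Reliable extraction depends on legibility, and text contrast can be checked mechanically before assets ship. Below is a minimal sketch using the WCAG 2.x relative-luminance and contrast-ratio formulas; the 4.5:1 threshold is WCAG's normal-text guideline, borrowed here as a sanity check rather than anything Apple publishes.

```python
def _linearize(c: float) -> float:
    # sRGB channel (0-1) to linear light, per the WCAG 2.x definition
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(v / 255) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    # Lighter luminance over darker, each offset by 0.05 (WCAG contrast ratio)
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black caption text on a white screenshot background: the maximum 21:1 ratio
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Captions scoring below roughly 4.5:1 (for example, mid-gray on white) are worth redesigning regardless of indexing, since they also hurt conversion.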
Download velocity now matters more than cumulative volume
Both stores have shifted weighting toward download velocity (the rate of install accumulation over a short window) rather than total historical download counts. An app gaining 1,000 installs in 24 hours will rank higher than one that accumulated the same number over 30 days, all else equal.
This explains why launch bursts and coordinated campaigns generate such pronounced ranking spikes. It also means that sustaining a ranking position requires sustained install momentum, not just a one-time push. Rankings decay when velocity drops, even if total downloads remain high.
Implications:
- Timing updates and campaigns to concentrate installs into narrow windows amplifies ranking impact
- Organic install velocity carries more algorithmic weight than paid installs, though paid volume still contributes
- Country-specific velocity affects rankings independently in each market: a global launch does not help your US ranking unless US-specific velocity increases
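The velocity idea can be made concrete as a trailing-window count. This is a toy illustration of the metric, not the stores' actual formula, and the 24-hour window is an assumption for the example:

```python
from datetime import datetime, timedelta

def install_velocity(install_times: list[datetime], now: datetime,
                     window: timedelta = timedelta(hours=24)) -> int:
    """Installs inside the trailing window: the quantity the stores appear to weight."""
    return sum(1 for t in install_times if now - window <= t <= now)

now = datetime(2026, 1, 15, 12, 0)
# App A: 1,000 installs concentrated in the last 24 hours
app_a = [now - timedelta(minutes=i) for i in range(1000)]
# App B: the same 1,000 installs spread evenly over ~30 days (one per 43 minutes)
app_b = [now - timedelta(minutes=43 * i) for i in range(1000)]

print(install_velocity(app_a, now))  # 1000
print(install_velocity(app_b, now))  # 34
```

Both apps have identical cumulative totals; only App A has the velocity that, per the pattern above, the ranking systems reward.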
Short description outperforms title on Google Play
In one analyzed dataset of over 500 Android iterations, adding a keyword to the short description produced ranking improvements in 84% of cases. By contrast, adding the same keyword only to the title yielded improvements in just 16% of cases. Removing a keyword from the short description resulted in zero ranking improvements.
This finding contradicts the widely held assumption that, mirroring iOS, the title carries the highest keyword weight on Google Play. The data suggests Google's algorithm places greater emphasis on the short description, at least for functional (non-brand) keywords.
What changes:
- The short description should be treated as the primary keyword placement field for Android, not a secondary messaging space
- The title remains important for brand and top-level category signals, but may not be the optimal location for long-tail or feature-specific keywords
- Full description text indexing still matters, but short description placement appears to trigger stronger ranking signals
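A pre-submission check can enforce this priority mechanically. The sketch below verifies that target keywords land in the short description and respects Google Play's published character limits (30 characters for the title, 80 for the short description); the function name and sample metadata are invented for illustration.

```python
def check_play_metadata(title: str, short_desc: str, targets: list[str]) -> dict:
    """Flag Play metadata problems: length limits and keywords missing
    from the short description (the stronger placement per the data above)."""
    issues = {}
    if len(title) > 30:
        issues["title_length"] = len(title)
    if len(short_desc) > 80:
        issues["short_desc_length"] = len(short_desc)
    missing = [k for k in targets if k.lower() not in short_desc.lower()]
    if missing:
        issues["missing_from_short_desc"] = missing
    return issues

issues = check_play_metadata(
    title="SleepWell",
    short_desc="Fall asleep faster with calming sounds and a smart alarm.",
    targets=["track sleep", "smart alarm"],
)
print(issues)  # {'missing_from_short_desc': ['track sleep']}
```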
Exact-match keywords are not required
In one dataset, an app targeting "strategy game" saw comparable or stronger rankings when the term was split across fields ("strategy" in title, "game" in subtitle) versus including "strategy game" verbatim in the title. Iterations where keywords appeared in partial or soft-match form (e.g., "tactical game" targeting "strategy game") showed a 60% improvement rate, comparable to exact matches.
This aligns with how modern search systems work: lemmatization (reducing words to root forms) and semantic matching are standard. Writing "run" will match queries for "running," "runner," and "runs." Forcing exact phrasing can actually limit coverage if it results in unnatural metadata that does not capture related query variations.
Practical takeaway:
- Prioritize natural, benefit-driven language over keyword-stuffed exact matches
- Distribute keyword components across metadata fields rather than repeating full phrases
- Focus on root word forms and semantic relevance rather than rigid keyword templates
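The root-form matching described above can be approximated with a toy suffix stripper. Real stores use proper lemmatization and semantic models, so this sketch only illustrates why "run" in metadata can cover "running", "runs", and "runner" queries:

```python
def stem(word: str) -> str:
    """Toy suffix stripper (not a real lemmatizer): strip a common
    English suffix, then undo consonant doubling (runn -> run)."""
    w = word.lower()
    for suffix in ("ing", "ers", "er", "es", "s"):
        if w.endswith(suffix) and len(w) - len(suffix) >= 3:
            w = w[: -len(suffix)]
            break
    if len(w) >= 4 and w[-1] == w[-2]:
        w = w[:-1]
    return w

def soft_match(metadata: str, query: str) -> bool:
    """True if every query term shares a root with some metadata term."""
    meta_stems = {stem(t) for t in metadata.split()}
    return all(stem(t) in meta_stems for t in query.split())

print(soft_match("run tracker with maps", "running runs runner"))  # True
```

The single word "run" in the metadata covers all three query variants, which is why rigid exact-match templates give up coverage for no gain.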
Field distribution beats single-field concentration on iOS
Conventional ASO advice often emphasizes placing your most important keyword in the title. But recent data suggests that spreading a keyword across multiple fields (title, subtitle, and keyword field) correlates with stronger ranking performance than concentrating it in one place.
In one dataset, keywords appearing in all three fields (title, subtitle, and hidden keyword field) improved rankings in 76% of iterations. Moving a keyword from title-only to title + subtitle placement increased the improvement rate to 80%. By contrast, moving a keyword from subtitle + keyword field to title + keyword field dropped the success rate to 33%.
The implication: Apple's algorithm may interpret field distribution as a stronger relevance signal than single-field prominence. Spreading the keyword indicates that the concept is central to the app's identity across multiple metadata layers.
How to apply this:
- For your top 3-5 keywords, aim for coverage across at least two fields (ideally all three)
- Avoid isolating a keyword in the title alone unless it is a brand or category anchor
- Subtitle should complement the title by extending keyword coverage, not by repeating the same terms
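Coverage across fields is easy to audit programmatically. The sketch below reports which iOS metadata fields fully contain a target keyword's terms, tokenizing the comma-separated keyword field along with spaces; the function name and sample metadata are hypothetical.

```python
import re

def tokens(text: str) -> set[str]:
    # Split on commas and whitespace so the hidden keyword field tokenizes too
    return set(re.split(r"[,\s]+", text.lower()))

def field_coverage(keyword: str, title: str, subtitle: str,
                   keyword_field: str) -> list[str]:
    """Return the iOS metadata fields containing every term of the keyword."""
    terms = set(keyword.lower().split())
    fields = {"title": title, "subtitle": subtitle, "keywords": keyword_field}
    return [name for name, text in fields.items() if terms <= tokens(text)]

cov = field_coverage("sleep tracker",
                     title="SleepWell: Sleep Tracker",
                     subtitle="Sleep tracker and smart alarm",
                     keyword_field="sleep,tracker,insomnia,nap")
print(cov)  # ['title', 'subtitle', 'keywords']
```

For a top keyword, the data above suggests aiming for a result list of length two or three rather than `['title']` alone.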
Engagement metrics are ranking inputs, not just conversion signals
Both Apple and Google now incorporate post-install engagement data into ranking calculations. Retention rate, session frequency, session length, and uninstall rate all function as ranking inputs, not just performance metrics for your internal dashboards.
This shift means that ASO is no longer purely a metadata and creative optimization discipline. An app with perfect keyword targeting and 5-star visual assets will still lose rankings if users install and immediately churn. Conversely, apps with strong Day 1, Day 7, and Day 30 retention receive a ranking boost even if their metadata is not perfectly optimized.
What this requires:
- Onboarding flows that reduce Day 1 churn directly affect organic rankings
- Feature development that drives repeat usage (streaks, notifications, daily content) is now an ASO investment
- Crash rates and ANR (App Not Responding) events on Android directly degrade rankings through the Android Vitals system
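Since retention now feeds rankings, it is worth computing Day-N retention the same way for every release. A minimal sketch of classic Day-N retention (share of a cohort with a session exactly N days after install); user IDs and dates are invented for the example:

```python
from datetime import date

def retention(installs: dict[str, date],
              sessions: dict[str, list[date]], day: int) -> float:
    """Fraction of installers with a session exactly `day` days after install."""
    cohort = len(installs)
    retained = sum(
        1 for user, installed in installs.items()
        if any((s - installed).days == day for s in sessions.get(user, []))
    )
    return retained / cohort if cohort else 0.0

installs = {"u1": date(2026, 1, 1), "u2": date(2026, 1, 1), "u3": date(2026, 1, 1)}
sessions = {"u1": [date(2026, 1, 2), date(2026, 1, 8)], "u2": [date(2026, 1, 2)]}
print(retention(installs, sessions, 1))  # 2 of 3 users returned on Day 1
print(retention(installs, sessions, 7))  # only u1 returned on Day 7
```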
Re-indexing happens faster than assumed
The 14-day waiting period for analyzing metadata changes, a longstanding industry rule of thumb, does not reflect how quickly ranking shifts actually occur. Data shows that position changes from metadata updates are visible within 1-3 days on iOS and about 3 days on Google Play.
This does not mean rankings stabilize in three days. It means the initial algorithmic response to your metadata change is detectable much faster than previously believed. Waiting two weeks to evaluate an iteration means spending 11 days analyzing noise rather than signal.
Revised analysis window:
- iOS: Monitor keyword positions starting 24 hours post-update
- Google Play: Monitor starting 72 hours post-update
- Longer observation (7-14 days) is still useful for detecting secondary effects and competitive shifts, but the primary metadata signal appears within the first week
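The revised windows translate directly into a monitoring schedule. A small sketch, assuming the 24-hour and 72-hour first-check delays above and a 14-day outer bound for secondary effects (function and key names are made up):

```python
from datetime import datetime, timedelta

def check_schedule(release_time: datetime, store: str) -> dict[str, datetime]:
    """When to first read keyword positions, and how long to keep watching."""
    first_delay = timedelta(hours=24) if store == "ios" else timedelta(hours=72)
    return {
        "first_check": release_time + first_delay,
        "secondary_effects_until": release_time + timedelta(days=14),
    }

release = datetime(2026, 1, 10, 9, 0)
print(check_schedule(release, "ios")["first_check"])          # 2026-01-11 09:00:00
print(check_schedule(release, "google_play")["first_check"])  # 2026-01-13 09:00:00
```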
No universal rules
Some findings contradict each other across individual cases. Short description may outperform title on average for Android, but specific apps in specific categories still see stronger results from title optimization. Exact matches may underperform partial matches in aggregate, but high-intent brand queries still benefit from exact keyword inclusion.
The value is not in declaring new universal rules. The value is in challenging assumptions that were never tested at scale in the first place. If the data contradicts expert intuition, the problem is not the data; it is the intuition that was built on anecdotes rather than reproducible evidence.
Moving from magic to method
The app store ranking systems remain opaque, and they will never be fully reverse-engineered. But the shift from single-case storytelling to pattern analysis across hundreds of iterations represents a necessary step toward making ASO a more empirical discipline.
Practitioners who continue to rely on folklore ("always put your keyword in the title," "wait two weeks to analyze," "exact match is required") will find themselves optimizing for conditions that no longer exist. The stores are not static. The signals that mattered in 2022 are not weighted the same way in 2026.
The teams that win are the ones treating ASO as an iterative testing discipline rather than a fixed playbook. Track your changes. Measure the results. Adjust when the data contradicts the narrative. That approach works regardless of what the algorithm does next.
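That testing loop can start as nothing more than a structured change log. A minimal sketch, with hypothetical field names and sample entries, that computes the improvement rate per change type, the same statistic cited throughout this piece:

```python
from dataclasses import dataclass

@dataclass
class Iteration:
    change: str          # e.g. "added keyword to short description"
    keyword: str
    rank_before: int
    rank_after: int      # measured after the store's re-index window

    @property
    def improved(self) -> bool:
        return self.rank_after < self.rank_before  # lower number = better position

def improvement_rate(log: list[Iteration], change_type: str) -> float:
    """Share of logged iterations of a given change type that improved rank."""
    matching = [it for it in log if change_type in it.change]
    return sum(it.improved for it in matching) / len(matching) if matching else 0.0

log = [
    Iteration("added keyword to short description", "track sleep", 42, 18),
    Iteration("added keyword to short description", "smart alarm", 55, 31),
    Iteration("added keyword to title", "sleep sounds", 30, 34),
]
print(improvement_rate(log, "short description"))  # 1.0
print(improvement_rate(log, "title"))              # 0.0
```

With enough entries, per-change-type rates from your own log replace borrowed benchmarks, which is the whole point of the method.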