Retention replaces install volume as the primary ranking signal
Both Apple and Google have shifted algorithmic weight from raw download counts to user retention and engagement metrics. We are seeing this play out in real-world ranking volatility: apps that drive high install velocity but poor retention no longer hold top positions the way they did in prior years. Google's 2025 rollout of the Engage SDK, Collections on the Android home screen, and the You tab for personalized re-engagement all point to the same strategic priority: keeping users inside installed apps, not just driving new downloads.
Apple's own transparency data tells the story in numbers. Weekly redownloads now exceed new downloads by more than 2:1 (1.9 billion redownloads versus 839 million new downloads). Platforms see this behavior and optimize for it. The implication for practitioners is clear: acquisition and retention can no longer be optimized in isolation. If users churn quickly after install, the algorithm notices, and wiki:organic-installs suffer downstream.
In-app events on iOS and LiveOps promotional content on Android are no longer optional features for seasonal promotions; they are ASO levers that simultaneously attract new users and reactivate lapsed ones. Running a well-executed seasonal event can land an app in browse placements and remind dormant users why they installed in the first place. This is one of the few remaining free channels for re-engagement that does not require paid retargeting budgets.
Custom Product Pages enter organic search and redefine intent matching
Until mid-2025, Custom Product Pages on iOS served paid traffic only. Apple's July update introduced keyword linking for CPPs, meaning they now appear in organic search results when users query specific terms tied to those pages. This is a structural change to how ASO works on iOS. Previously, every organic visitor saw the same default product page. Now, different users searching different queries can land on different versions of the same app's listing โ each optimized for the specific intent behind that search.
A fitness app can show running-focused screenshots and messaging to users searching "run tracker," and strength-training visuals to those searching "workout log." Same app, different queries, different pages, all in organic search. Apple increased the CPP limit from 35 to 70 variations per app, which expands the scope for audience segmentation and hypothesis testing. The open questions are tactical: how Apple handles keyword overlap between CPPs, whether query combinations trigger CPP matches or only single tokens, and how CPPs compete with the default listing when both are indexed for the same term.
Google Play offers a parallel feature in Custom Store Listings, which allow separate pages by country, user segment, or ad campaign. The cap is 50 CSLs at a time. Both platforms are moving toward the same outcome by different paths: personalization at the product page level, driven by user intent rather than one-size-fits-all metadata.
Blaming the algorithm is usually a way of avoiding the diagnostic work
When downloads drop, the first instinct on many teams is to point at the algorithm. Something changed, the store is behaving differently, and there is nothing to be done. Sometimes that is true โ App Store and Google Play algorithms do change, and when they do, the effects are real. But most of the time, the cause is more specific and more fixable than "the algorithm moved."
The more useful response to a traffic drop is to treat it as a diagnostic exercise. Start with traffic sources: did search fall, or browse, or paid, or collections? Then look at the category more broadly. A competitor may have started bidding harder on a keyword your app relies on. A new app may have launched and begun pulling users who would have found yours. One of your competitors may have switched categories entirely, which can affect how the store treats nearby apps in rankings and browse surfaces. These are all discoverable if you go looking.
Split the data by country, by traffic source, by channel. If wiki:browse-surface-traffic fell, maybe the app lost a placement somewhere and that explains most of the story. If conversion dropped but impressions held steady, that is a different kind of problem and requires a different response. Once that work is done, it is reasonable to check whether anyone else in the category is seeing something similar. But that is the second step, not the first.
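A minimal sketch of that split, assuming a daily export with date, traffic source, country, and download columns; the file name and column names here are illustrative, not a documented App Store Connect or Play Console schema:

```python
import pandas as pd

# Hypothetical export: one row per day / traffic source / country.
df = pd.read_csv("downloads_daily.csv", parse_dates=["date"])

drop_start = pd.Timestamp("2025-09-01")  # date the drop was first noticed
before = df[df["date"] < drop_start]
after = df[df["date"] >= drop_start]

def daily_avg(frame, keys):
    # Average downloads per day for each segment, so periods of
    # different lengths stay comparable.
    days = frame["date"].nunique()
    return frame.groupby(keys)["downloads"].sum() / days

for keys in (["traffic_source"], ["country"], ["traffic_source", "country"]):
    delta = (daily_avg(after, keys) - daily_avg(before, keys)).sort_values()
    print(f"\nDaily change by {keys}:")
    print(delta.head(10))  # the most negative segments explain most of the drop
```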
Pointing at the algorithm and leaving it there does not change anything. Downloads do not come back because a cause was named. The gap between doing ASO adequately and doing it well is larger than most teams realize, and much of that gap lives in the follow-through after something changes.
Keyword coverage is easy to show and hard to justify
There is a version of keyword reporting that looks strong in a slide deck. A growing list of ranked terms, broader coverage across the category, positions improving over time. It is a satisfying chart to present and relatively straightforward to generate. The problem is that none of it tells you whether the effort is paying off in a way that matters. If the keywords have low traffic and conversion through them is weak, the number is just a number. It demonstrates activity more than results.
Relevance determines whether a keyword is worth having. On iOS, character limits enforce discipline by default, but on Google Play there is more room, and that flexibility can make the problem worse. When you can add more, the temptation is to add more, and the result is often a long list of terms that look like coverage but are not doing anything.
A more grounded way to track wiki:keyword-ranking performance is to look at impressions alongside installs and conversion rate. When you change a keyword set, impressions should go up, and conversion should not fall at the same time. If both move in the right direction, the change was probably worth making. If impressions go up and conversion drops, something is off. One step that often gets skipped: change keywords separately from everything else. If you update the keyword set and redesign the screenshots at the same time, you will not be able to tell which one drove whatever change you see.
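A before-and-after check along those lines might look like the following sketch, assuming a daily export of search impressions and installs; the file name, field names, and the 14-day windows are assumptions, not a store API:

```python
import pandas as pd

# Hypothetical daily search metrics for the app.
df = pd.read_csv("search_metrics_daily.csv", parse_dates=["date"])

change_date = pd.Timestamp("2025-10-01")  # day the new keyword set went live
window = pd.Timedelta(days=14)

before = df[(df["date"] >= change_date - window) & (df["date"] < change_date)]
after = df[(df["date"] >= change_date) & (df["date"] < change_date + window)]

def summarize(frame):
    impressions = frame["impressions"].sum()
    installs = frame["installs"].sum()
    return impressions, installs / impressions if impressions else 0.0

imp_before, cvr_before = summarize(before)
imp_after, cvr_after = summarize(after)

print(f"Impressions: {imp_before} -> {imp_after} ({imp_after / imp_before - 1:+.1%})")
print(f"Conversion:  {cvr_before:.2%} -> {cvr_after:.2%}")
# Healthy outcome: impressions up, conversion flat or up.
# Impressions up but conversion down suggests the new terms pull the wrong audience.
```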
Creative testing is only useful if you know what you are trying to learn
Running screenshot tests without a clear idea of what you are testing is common, particularly on teams newer to ASO. You update the icon, try a different color, follow a design trend you saw in a competitor's store page, and then see what the numbers do. If they go up, great. If they do not, you try something else. The issue is that without a defined hypothesis, you cannot really learn anything from the result either way. You end up with a history of tests but no accumulated understanding of what your users respond to or why.
Before running a test, it is worth knowing what specifically you are trying to find out, what a good result looks like, and what you would do with a negative result. Testing whether showing a new feature in the first few screenshots improves conversion is a testable idea with a clear output. Testing whether the screenshots look nicer is not.
It is also reasonable to be skeptical of what the platform tells you when a test concludes. Test results from App Store and Play Store are not perfectly reliable, and the confidence levels the platforms show are there for a reason. A result that looks positive in a 50/50 test can behave differently once the change goes live to everyone. Checking performance with two weeks of real data after rollout gives a more honest picture, and it is worth building that into the process rather than treating the platform's conclusion as final.
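One way to run that post-rollout check is a plain two-proportion z-test on impressions and installs from the test period versus the two weeks after rollout. This is a generic statistical sanity check, not something the stores provide, and the figures below are made up:

```python
from math import sqrt
from statistics import NormalDist

def conversion_delta_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: did conversion really change between two periods?

    conv_*: installs (conversions), n_*: impressions. Assumes independent
    page views, which store traffic only approximates, so treat the output
    as a sanity check rather than proof.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value < alpha, p_b - p_a, p_value

# Example: conversion during the 50/50 test vs two weeks after full rollout.
significant, delta, p = conversion_delta_significant(1200, 40000, 1180, 41000)
print(significant, f"{delta:+.3%}", f"p={p:.3f}")
```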
Apple Search Ads is a keyword discovery tool, not just an acquisition channel
Most teams treat Apple Search Ads and ASO as two separate budgets with different owners. In practice, both tools work on the same page in the same store, and when there is no connection between them, both lose effectiveness. Apple Search Ads is one of the few places where you can see a direct link between a specific query and user behavior. Not using that data to optimize metadata means leaving readily available insights on the table.
A practical scenario: you want to add a cluster like "route planner running" to your metadata but are not confident in its potential. Launch a campaign with exact match and look at the conversion rate after a week. If conversion is above your account average, the keywords go into the metadata. If it is below, either the page is not ready or the audience is not yours. This is faster and more precise than waiting two to four weeks for an organic effect.
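Encoded as a small helper, the decision might look like the sketch below; the minimum tap threshold and the comparison against the account average are illustrative assumptions, not Apple Search Ads guidance:

```python
def keyword_verdict(campaign_installs, campaign_taps, account_cvr, min_taps=200):
    """Decide whether an exact-match test cluster earns a place in metadata.

    Thresholds here are illustrative, not platform rules.
    """
    if campaign_taps < min_taps:
        return "keep running: not enough taps to judge"
    cvr = campaign_installs / campaign_taps
    if cvr >= account_cvr:
        return f"add to metadata (CVR {cvr:.1%} >= account {account_cvr:.1%})"
    return f"hold off: page or audience mismatch (CVR {cvr:.1%} < account {account_cvr:.1%})"

print(keyword_verdict(campaign_installs=54, campaign_taps=240, account_cvr=0.18))
```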
Apple's March 2026 rollout of additional ad slots in search results to all markets changes how paid and organic traffic interact. Previously, each query had one top ad slot; now there are multiple positions. This raises the risk of cannibalization: paid budget grows, paid installs grow, but the total result barely changes because the ad is simply replacing organic traffic rather than adding new users. The simple rule: if the app already ranks organically in the top 1-3 for a query, an aggressive bid on that same keyword requires explicit justification, such as brand defense, blocking a competitor, or testing a CPP. If the organic position is below the top 10, paid coverage likely delivers real incremental gain.
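That rule of thumb is simple enough to encode directly; the cutoffs below mirror the heuristic in the text, not any platform-documented behavior:

```python
def bid_recommendation(organic_rank, justification=None):
    """Apply the cannibalization rule of thumb to one keyword.

    organic_rank: current organic position for the query (None if unranked).
    justification: an explicit reason to bid anyway, e.g. "brand defense".
    """
    if organic_rank is not None and organic_rank <= 3:
        if justification:
            return f"bid allowed: {justification}"
        return "skip or bid low: high cannibalization risk, no stated justification"
    if organic_rank is None or organic_rank > 10:
        return "bid: paid coverage is likely incremental"
    return "test cautiously: mid-range organic position, measure incrementality"

print(bid_recommendation(organic_rank=2))
print(bid_recommendation(organic_rank=2, justification="brand defense"))
print(bid_recommendation(organic_rank=27))
```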
Localization is the most underutilized growth lever in mobile marketing
Only 4% of the world speaks English as a first language, yet the majority of app listings are English-only. Localizing metadata into the top 10 app store languages can increase downloads by 200-300% in those markets. Effective localization goes beyond simple translation. You need to research local keywords: the direct translation of an English keyword often has low search volume. You need to adapt screenshots by translating caption text, adjusting imagery for cultural relevance, and considering right-to-left layouts for Arabic and Hebrew. You need to localize descriptions by writing for local users, referencing region-specific use cases and social proof.
The biggest barrier to localization is time and cost. AI-powered translation tools can reduce the effort from weeks to hours by handling keyword research, cultural adaptation, and character-limit compliance for 40+ languages automatically. Teams that translate text but leave screenshots in English lose conversion in markets with low English proficiency. In the App Store, each country can have multiple locales that are indexed in parallel; in the US alone you can use English, Spanish (Mexico), Portuguese (Brazil), Russian, and others, which substantially expands keyword coverage without additional budget. Keyword combinations are only formed within a single locale, so terms should be distributed so that each locale contributes new combinations rather than repeating ones already covered.
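A rough sketch of that distribution step, assuming candidate term lists per locale and the iOS 100-character keyword field; the locale codes and terms are illustrative, and the greedy dedup pass is just one way to avoid spending characters on terms another locale already covers:

```python
CHAR_LIMIT = 100  # iOS keyword-field cap per locale

# Hypothetical candidate keyword lists per US-storefront locale.
candidates = {
    "en-US": ["run tracker", "route planner", "workout log", "interval timer"],
    "es-MX": ["run tracker", "marathon plan", "pace calculator"],
    "pt-BR": ["workout log", "trail running", "5k training"],
}

def dedupe_across_locales(candidates, limit=CHAR_LIMIT):
    used, final = set(), {}
    for locale, terms in candidates.items():
        kept = []
        for term in terms:
            if term in used:
                continue  # another locale already covers this term
            field = ",".join(kept + [term])
            if len(field) <= limit:
                kept.append(term)
                used.add(term)
        final[locale] = ",".join(kept)
    return final

for locale, field in dedupe_across_locales(candidates).items():
    print(f"{locale}: {field} ({len(field)} chars)")
```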
AI enters the stores but does not replace strategic judgment
At WWDC 2025, Apple announced AI-generated App Store Tags: labels created from app metadata including screenshots and description. These tags affect browse placements by helping users find similar apps. Google uses Guided Search to organize results by intent and Gemini in Play Console for translations. A separate trend worth tracking: AI interfaces like ChatGPT are influencing app choices before users even open the store. People research options through AI assistants and then go to the store afterward. This creates an additional layer where traditional metadata may not be the primary point of contact.
The balance here matters. AI is strong at pattern recognition, scaling, and speed. Strategic decisions remain with humans. The quality of hypotheses determines the value of tests; tools only accelerate execution, they do not replace understanding of the product and audience. Five years of A/B testing at Super Unlimited VPN produced a consistent finding: modern redesigns of their App Store screenshots lost 80% of the time. Users prefer what they were used to seeing. The instinct to follow contemporary design trends ran into the data, and the data won. This is the lesson: better design and better-performing design are not the same thing, and only testing separates the two.