Metadata still opens the door
The title, subtitle, and keyword field remain the highest-leverage on-page levers in ASO because they determine which searches an app is even eligible to appear in. Apple's algorithm reads the title first and weights it most aggressively. The 30-character limit forces a choice: brand recognition or keyword relevance. The pattern that consistently outperforms is [Brand] – [Primary Keyword]. For example, "Centr: Workout & Fitness Plan" signals relevance for fitness queries in ways "Centr" alone cannot.
Position within those 30 characters matters. The first keyword carries more weight than the last. If the primary target term is "expense tracker," that phrase should open the title, not close it. Many teams bury their strongest keyword after a long brand name and lose ranking potential they never measured.
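A quick way to enforce that discipline is a helper that assembles the title and fails loudly when the 30-character limit would truncate it. This is an illustrative sketch, not an Apple API; `build_title` and its separator default are hypothetical names.

```python
APPLE_TITLE_LIMIT = 30  # App Store title character limit

def build_title(brand: str, keyword: str, sep: str = ": ") -> str:
    """Assemble "[Brand]: [Primary Keyword]" and enforce the limit.

    Raising instead of silently truncating forces a deliberate choice
    about which words to cut, keeping the strongest keyword intact.
    """
    title = f"{brand}{sep}{keyword}"
    if len(title) > APPLE_TITLE_LIMIT:
        raise ValueError(
            f"'{title}' is {len(title)} chars; limit is {APPLE_TITLE_LIMIT}"
        )
    return title

print(build_title("Centr", "Workout & Fitness Plan"))  # fits at 29 chars
```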
Apple's 100-character keyword field is a token list parsed combinatorially with the title and subtitle. The algorithm can surface an app for "fitness tracker for women at home" even when that exact phrase appears nowhere in the metadata, because it combines "fitness" from the title with "tracker,women,home" from the keyword field. The field should contain zero terms already present in the title or subtitle. Repeating a word wastes characters that could index an entirely different query.
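The dedup rule and the combinatorial matching can be sketched together. This is a rough, assumed model of the indexer (the real algorithm is not public); the stopword list and all function names here are illustrative.

```python
import re

# Assumed: Apple ignores common filler words in queries and metadata
STOPWORDS = {"for", "at", "the", "a", "an", "and", "of"}

def tokens(text: str) -> set:
    """Lowercase word tokens from a metadata field or query."""
    return set(re.findall(r"[a-z0-9]+", text.lower())) - STOPWORDS

def dedup_keyword_field(title: str, subtitle: str, keywords: str) -> str:
    """Drop keyword-field terms already indexed via title or subtitle."""
    seen = tokens(title) | tokens(subtitle)
    kept = [k for k in keywords.split(",") if k.strip().lower() not in seen]
    return ",".join(kept)

def eligible_for(query: str, title: str, subtitle: str, keywords: str) -> bool:
    """Toy combinatorial check: eligible when every meaningful query
    token appears somewhere across the three indexed fields."""
    indexed = tokens(title) | tokens(subtitle) | tokens(keywords.replace(",", " "))
    return tokens(query) <= indexed

title, subtitle = "Centr: Workout & Fitness Plan", "Personal training at home"
print(dedup_keyword_field(title, subtitle, "fitness,tracker,women,home"))
# -> "tracker,women" ("fitness" and "home" are already indexed)
print(eligible_for("fitness tracker for women at home", title, subtitle, "tracker,women"))
# -> True, despite the exact phrase appearing nowhere
```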
Google Play indexes differently
Google Play has no hidden keyword field. The Play Console indexes the app name (50 characters), the short description (80 characters), and the entire long description (up to 4,000 characters). It behaves more like web search than Apple's token logic.
Machine learning analysis of 512 Google Play metadata iterations found that Short Description outperforms Title as a ranking lever. Iterations where the keyword appeared in Short Description after changes showed 84.2% improvement rates—46.5 percentage points above baseline. Iterations where the keyword appeared only in Title showed just 15.8% improvement, 21.9 points below baseline.
Adding the keyword to Short Description, even when it already existed in Full Description, correlated with a 54.5% improvement rate versus a 37.7% baseline. The short description appears to carry higher algorithmic weight than equivalent mentions buried in the long description body. Primary keywords belong in the first sentence.
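As a sanity check before shipping Play metadata, a small linter can enforce both rules at once: the 80-character cap and the keyword in the first sentence. `check_short_description` is a hypothetical helper, not a Play Console API.

```python
PLAY_SHORT_DESC_LIMIT = 80  # Google Play short description cap

def check_short_description(short_desc: str, keyword: str) -> list:
    """Return a list of problems; an empty list means the copy passes."""
    issues = []
    if len(short_desc) > PLAY_SHORT_DESC_LIMIT:
        issues.append(f"over {PLAY_SHORT_DESC_LIMIT} chars ({len(short_desc)})")
    first_sentence = short_desc.split(".")[0].lower()
    if keyword.lower() not in first_sentence:
        issues.append(f"keyword '{keyword}' missing from first sentence")
    return issues

print(check_short_description("The expense tracker that budgets for you.", "expense tracker"))
# -> [] (passes both checks)
print(check_short_description("Track spending automatically.", "expense tracker"))
# -> flags the missing keyword
```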
Running one keyword strategy across both platforms is one of the most expensive mistakes in ASO. Apple and Google index different fields with different logic. Each platform requires separate search volume data, separate competitive landscapes, and separate indexing assumptions.
Partial keyword matches work
A common industry assumption holds that without exact keyword matching, algorithms won't surface the app. Machine learning data from App Store iterations shows the opposite. Scenarios with partial keyword coverage (lemmatized forms or semantically close terms) produced better results than exact matches in most ranking segments.
For example, iterations where the keyword was present partially (e.g., "strategy" when the target phrase was "strategy game") showed approximately 60% improvement rates with a median lift of six positions. This held even when the keyword appeared only as a lemma. Exact matches did not consistently outperform partial coverage, except in very high-competition zones.
Keyword splitting across Title and Subtitle produced an 80% improvement rate in the available sample. Moving a keyword from Title alone into Title+Subtitle (adding part of it to the subtitle while keeping part in the title) improved positions in 20 out of 25 cases. Adding a keyword across all three fields—Title, Subtitle, and Keywords—showed 76.3% improvement with a median lift of 30 positions.
The implication is clear: the algorithm lemmatizes terms and recognizes semantic relationships. Writing a keyword exactly as users type it is less important than ensuring related forms appear across metadata fields.
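A toy version of this behavior makes the point concrete. Real stores use far more sophisticated NLP; the naive suffix stripper below (`stem` and `coverage` are both hypothetical names) only illustrates why lemma-level coverage can count toward a target phrase.

```python
import re

def stem(word: str) -> str:
    """Naive suffix stripping as a stand-in for real lemmatization."""
    if word.endswith("ies") and len(word) > 4:
        return word[:-3] + "y"          # strategies -> strategy
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]  # games -> game, training -> train
    return word

def coverage(target_phrase: str, metadata: str) -> float:
    """Fraction of target-phrase tokens whose stem appears in metadata."""
    meta_stems = {stem(w) for w in re.findall(r"[a-z]+", metadata.lower())}
    targets = [stem(w) for w in re.findall(r"[a-z]+", target_phrase.lower())]
    return sum(t in meta_stems for t in targets) / len(targets)

# "strategy" alone gives partial coverage of the phrase "strategy game"
print(coverage("strategy game", "Kingdoms: Epic Strategy Battles"))   # 0.5
# lemmatized forms ("strategies", "games") still count as full coverage
print(coverage("strategy game", "Plan winning strategies and games"))  # 1.0
```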
Behavioral signals decide the top spots
Metadata earns eligibility. Behavioral signals decide ranking within that eligible set. Download velocity, conversion rate from search, and retention are the strongest off-metadata factors shaping visibility once the algorithm understands relevance.
Apple explicitly confirms that downloads, ratings, and reviews influence search ranking. Conversion rate from search—the percentage of users who see the app in results and tap "Get"—feeds directly back into position. Industry data shows that moving from 3% to 5% conversion on a keyword driving 10,000 monthly impressions means 200 additional installs from zero spend. Those 200 installs signal stronger relevance to the algorithm. Ranking improves, impressions increase, and the loop compounds.
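The arithmetic behind that loop is worth making explicit. A minimal sketch (the function name is assumed, not from any store API):

```python
def extra_installs(impressions: int, cvr_before: float, cvr_after: float) -> int:
    """Additional monthly installs from a lift in search conversion rate."""
    return round(impressions * (cvr_after - cvr_before))

# The worked example from the text: 3% -> 5% on 10,000 monthly impressions
print(extra_installs(10_000, 0.03, 0.05))  # 200 installs from zero spend
```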
The first 170 to 255 characters of the app description carry almost all conversion weight. Fewer than 2% of App Store visitors expand the full description. Writing 4,000 characters for a tiny fraction of the audience means the opening lines must convert the 98% who never expand it. Descriptions that open with proof, urgency, or a clear value statement outperform feature-led copy.
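A simple preview helper shows exactly what that above-the-fold slice contains, so copy reviews can focus on it. The 255-character fold value is the upper end of the range cited above; the helper name is illustrative.

```python
APP_STORE_FOLD = 255  # upper end of the visible preview range

def above_fold(description: str, fold: int = APP_STORE_FOLD) -> str:
    """Return the slice of the description visible before 'more',
    cut at a word boundary so the preview reads the way users see it."""
    if len(description) <= fold:
        return description
    cut = description.rfind(" ", 0, fold + 1)
    return description[:cut] if cut != -1 else description[:fold]

desc = "Join 5 million users who cut monthly spending by 20%. " * 10
print(len(above_fold(desc)))  # the reviewable preview is at most 255 chars
```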
Ratings are not just social proof. Once an app stays above 4.0 stars, rankings improve measurably because users trust the listing more and the algorithm reads that as stronger quality. Review sentiment, response rate, and the distribution of star rating values all feed into the quality layer of the ranking model.
Algorithm updates rarely come with warnings
Rankings shift constantly as competitors update metadata, as install velocity fluctuates, and as Apple and Google run their own experiments. The App Store keyword field resets its indexing when a new app version is pushed. Ranking gains from a well-optimized keyword set can take two to four weeks to stabilize after an update. Teams that change the keyword field with every release and check rankings three days later are measuring noise, not signal.
Daily keyword movement tracking is the fastest way to catch changes before they turn into traffic drops. Monitoring keyword rank, review trends, and conversion signals in one place removes the guesswork and prevents teams from reacting to variance instead of trends.
The real cost of partial optimization
Most teams optimize the 20% they can see: the title, the icon, maybe the screenshots. The other 80% of ranking factors—download velocity, behavioral signals, conversion rates from search—quietly decide outcomes. An app can have perfect metadata and still rank below a competitor with messier copy but stronger engagement numbers.
The shift we are tracking is away from metadata-only ASO toward full-funnel systems that treat metadata as the eligibility layer and behavioral performance as the ranking layer. Teams that align product quality, retention mechanics, and store presence see compounding returns. Teams that optimize metadata in isolation see diminishing returns as competitors catch up.
ASO in 2026 is no longer about keyword research alone. It is about understanding which signals move the algorithm, which can be directly controlled, and which require product and growth changes outside the store listing. The gap between apps that grow organically and apps that stall is increasingly the gap between teams that measure all the inputs and teams that measure only the ones they inherited assumptions about.