Store Policies Tested by New AI Capabilities
Apple's wiki:app-review-guidelines face renewed pressure as AI-powered apps test long-standing content and code execution rules. In January, the company privately threatened to remove Grok from the App Store after the chatbot generated nonconsensual sexualized deepfakes: images that violated multiple content policies but required sustained pressure from U.S. senators and internal review escalations before xAI submitted acceptable moderation controls. Apple rejected the first fix as insufficient, demanded further revisions, and eventually approved a revised submission only after multiple rounds of back-and-forth.
The pattern extends beyond Grok. A report from the Tech Transparency Project identified 38 "nudify" apps across Apple's and Google's stores that collectively amassed 483 million downloads and $122 million in revenue. Some carried "E for Everyone" age ratings. Despite official policies banning apps that create nonconsensual sexual content, both platforms were actively surfacing these tools through autocomplete search suggestions and promoted results. When asked for comment, Apple removed 15 apps flagged in the report and blocked several search terms, yet the same apps had been reported months earlier, only to reappear under different developer accounts.
Data Harvesting Disguised as Rewards
Data collection abuses continue to slip through wiki:app-store-connect review. Freecash, a rewards app that peaked at #2 on the U.S. App Store and #7 on Google Play, collected sensitive user data including race, religion, sexual orientation, health information, and biometrics while marketing itself as a way to "make money scrolling TikTok." In reality, the app functioned as a data broker connecting game developers with high-value users willing to install and spend on mobile games.
The app's trajectory raises questions about review efficacy. After an initial removal in June 2024, Freecash reentered the App Store months later under a Cyprus-based developer account (256 Rewards Ltd) that was subsequently rebranded. The app maintained a 4.7-star rating (high enough to avoid automatic flagging) and reached over 5.5 million downloads in January 2026 before Apple removed it following a TechCrunch inquiry. The developer's use of multiple accounts to circumvent prior takedowns directly violates Section 3.1.2(a) of the App Store Review Guidelines, which prohibits circumventing bans.
Code Execution Rules Clash With Next-Gen Development Tools
Vibe coding platforms are colliding with Section 2.5.2 of the App Review Guidelines, which prohibits apps from downloading or executing code that changes functionality. Apple removed the vibe coding app "Anything" twice, first in March and then again days after reinstatement, citing violations of rules designed to prevent malicious apps from altering behavior post-review.
Developers behind Anything argue the guidelines are "outdated" and exclusionary to a new generation of creators building AI-powered coding platforms. The app allowed users to generate apps via text prompts, preview them on-device, and submit them to the App Store through their own developer accounts. Apple rejected four separate technical implementations before removing the app entirely. The developers have since shifted to cloud-based app generation and desktop companion tools to bypass on-device preview restrictions.
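The underlying conflict is straightforward: Guideline 2.5.2 targets apps that fetch code at runtime and execute it, changing their own behavior after review has signed off. A minimal, hypothetical sketch of that pattern, in Python for readability (the function, snippet, and names are illustrative and not drawn from Anything or any other app discussed here):

```python
# Hypothetical sketch of the runtime code execution that Guideline 2.5.2
# restricts: an app downloads model-generated code and executes it on-device,
# altering its own functionality after App Review has already approved it.

def run_generated_preview(generated_code: str) -> dict:
    """Execute downloaded code in a fresh namespace and return the preview
    object it defines. In a shipped iOS app, this class of behavior is
    exactly what the guideline prohibits."""
    namespace: dict = {}
    exec(generated_code, namespace)  # arbitrary downloaded code runs here
    return namespace["preview"]

# Simulated model output: in a vibe coding app this would arrive over the
# network in response to a user's text prompt.
snippet = "preview = {'title': 'Demo', 'greet': lambda: 'Hello'}"
preview = run_generated_preview(snippet)
print(preview["greet"]())  # prints "Hello"
```

The workaround Anything's developers adopted, generating apps in the cloud and previewing through a desktop companion, avoids the on-device execution step while preserving the same user-facing workflow.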
This friction comes as The Information reports a surge in new App Store submissions driven by the explosion of vibe coding tools. The outcome of this enforcement pattern will shape whether AI-assisted app creation remains viable within Apple's ecosystem, a question that will likely surface at WWDC26 if the company chooses to address it.
Regulatory Dynamics Shift in India and U.S. Courts
India's government abandoned its sixth attempt in two years to mandate pre-installation of state-owned apps on smartphones, including iPhones. The most recent proposal would have required Sanchar Saathi, a tracking app framed as a security tool, to be installed as undeletable software via iOS update. Apple refused to comply, consistent with its historical stance on government-mandated software, and the Indian IT ministry ultimately reversed course after consultations with the electronics industry.
On the Android side, independent app store Aptoide filed a federal antitrust lawsuit against Google in the U.S. District Court for the Northern District of California, alleging that OEM lock-in agreements, developer exclusivity deals, and added friction for alternative stores continue to harm competition. The complaint builds on findings from the Epic Games case and argues that post-enforcement changes have not gone far enough to open Android wiki:app-distribution.
What This Means for Practitioners
The convergence of these cases reveals several operational realities:
- AI moderation remains reactive: Both Apple and Google rely heavily on external reporting and media coverage to identify policy violations at scale. Proactive detection systems failed to catch nudify apps, data harvesting schemes, and coordinated wiki:chart-manipulation tactics until external parties flagged them.
- Developer account switching is a known exploit: Apps banned under one account routinely reappear under new developer identities. The tactic reliably circumvents enforcement, and countering it at scale requires deliberate platform action.
- Code execution policies lag behind development paradigms: The rise of vibe coding and on-device AI tools creates fundamental tension with rules written to prevent malware. Apple has not yet articulated a path forward that accommodates legitimate use cases.
- High ratings do not signal compliance: Freecash maintained a 4.7-star rating while violating multiple policies. Review manipulation remains a viable strategy for bad actors seeking to avoid automated flagging.
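The account-switching pattern in particular lends itself to automated detection. A hedged sketch, assuming nothing about either store's internal tooling: a platform could fuzzy-match the metadata of incoming submissions against previously banned listings filed under different developer accounts. The threshold, field names, and matching method below are all illustrative.

```python
from difflib import SequenceMatcher

def metadata_similarity(a: dict, b: dict) -> float:
    """Crude text similarity between two app listings (name + description)."""
    text_a = (a["name"] + " " + a["description"]).lower()
    text_b = (b["name"] + " " + b["description"]).lower()
    return SequenceMatcher(None, text_a, text_b).ratio()

def flag_resubmissions(banned: list[dict], incoming: list[dict],
                       threshold: float = 0.85) -> list[tuple[str, str, float]]:
    """Flag new submissions that closely resemble a banned listing but arrive
    under a different developer account -- the reappearance pattern described
    above. Threshold and matching are illustrative, not any store's real system."""
    flags = []
    for new in incoming:
        for old in banned:
            if new["developer"] != old["developer"]:
                score = metadata_similarity(new, old)
                if score >= threshold:
                    flags.append((new["name"], old["name"], round(score, 2)))
    return flags

# Illustrative data only: the account names are hypothetical.
banned = [{"name": "Freecash", "developer": "Acct A",
           "description": "Make money scrolling and playing games"}]
incoming = [{"name": "Freecash Rewards", "developer": "Acct B",
             "description": "Make money scrolling and playing games"}]
print(flag_resubmissions(banned, incoming))
```

A production system would weigh far more signals (binaries, payment details, device fingerprints), but even this naive text match would have linked a rebranded listing to its banned predecessor.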