Platform Policy Enforcement Intensifies Across iOS and Android
The spring of 2026 marks a decisive shift in how both major app platforms enforce their existing guidelines. Google Play is rolling out new privacy-focused requirements and developer tools, while Apple has accelerated removals of apps that violate long-standing security and quality standards. For practitioners, the message is clear: compliance gaps that previously slipped through review are now being actively detected and enforced.
Google Play's Privacy-First Contact and Location Policies
Google Play is introducing mandatory privacy controls for contact and location access, effective later this year. The new requirements center on two system-level features designed to give users granular control over what data they share:
Contact Picker requirement: Apps requesting contact information for features like sharing, invites, or one-time lookups must use the Android Contact Picker or similar privacy-focused alternatives like Sharesheet. The READ_CONTACTS permission will be reserved exclusively for apps that require full, ongoing access to a user's contact list to function. Developers using contacts for temporary actions must remove READ_CONTACTS entirely if targeting Android 17 and above.
Location button requirement: Apps needing precise location for discrete, temporary actions, such as finding a store or tagging a photo, must implement the new streamlined location button. This replaces complex permission dialogs with a single tap, eliminating persistent location access unless the app's core functionality genuinely requires it.
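For the contact-picker case, a minimal Kotlin sketch using the AndroidX Activity Result API shows the pattern: the system picker returns a single contact URI without the app ever holding READ_CONTACTS. `ActivityResultContracts.PickContact` is a real AndroidX contract; the activity and helper names here are illustrative.

```kotlin
import android.net.Uri
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

// Illustrative activity; assumes androidx.activity:activity-ktx on the classpath.
class InviteActivity : ComponentActivity() {

    // PickContact launches the system contact picker. No READ_CONTACTS
    // permission is needed because the user explicitly selects one contact.
    private val pickContact =
        registerForActivityResult(ActivityResultContracts.PickContact()) { uri: Uri? ->
            uri?.let { sendInvite(it) } // handle only the contact the user chose
        }

    fun onInviteClicked() {
        pickContact.launch(null) // PickContact takes no input
    }

    private fun sendInvite(contactUri: Uri) {
        // Query just the fields you need via ContentResolver, then discard.
    }
}
```

Because the user hands the app exactly one contact per interaction, this flow satisfies the temporary-use case without any manifest-declared contacts permission.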
Both policies include a compliance path for apps with legitimate needs: developers can submit a Play Developer Declaration through the Play Console (available before October) to justify why full access is necessary. This creates a clear divide between temporary-use cases and persistent-access requirements.
Developer Tools to Prevent Rejection Before Submission
Google is deploying two new enforcement mechanisms to help developers catch policy violations before they reach human review:
- Play policy insights in Android Studio (launching by October): Proactively identifies whether your app should use the new contact or location features and provides step-by-step implementation guidance
- Pre-review checks in Play Console (available October 27): Flags potential contacts or location permissions policy issues before you submit for review
Account Transfer Security to Combat Fraud
Starting May 27, all Play Console account ownership transfers must use the new official account transfer feature. Unofficial transfers, such as sharing login credentials or buying accounts on third-party marketplaces, are now explicitly prohibited. Every transfer includes a mandatory 7-day security cool-down period, giving teams time to detect and cancel unauthorized takeover attempts.
Apple's Escalating Enforcement: Fraudulent Apps and AI-Generated Code
While Google focuses on privacy tooling, Apple is addressing two distinct enforcement failures: scam apps that bypass the App Review Guidelines through deceptive tactics, and AI-generated applications that violate core security principles.
Scam Apps Exposed After Months of Data Harvesting
Apple removed the Freecash app in mid-April after the app spent months harvesting user data and engaging in misleading marketing. Freecash had reached #2 in the U.S. App Store charts in January 2026 after heavy TikTok promotion that promised users up to $35 per hour for watching content. In reality, the app collected extensive personal data (race, religion, health, biometrics) and pushed users to install mobile games where the real monetization occurred through in-app purchases and paid ads.
The app was downloaded 5.5 million times across iOS and Android in January alone. Evidence suggests the developers used bots, fake ratings, and account hijacking to bypass review: the app had been banned in 2024, then an existing App Store app was renamed "Freecash" and updated with the same functionality. Apple removed the app only after direct media inquiry, despite prior reporting on its deceptive practices.
This incident highlights a persistent gap in App Store review: scam apps that acquire existing listings or use sophisticated manipulation tactics can evade initial detection and remain live for months. For legitimate developers, this underscores the importance of honest review management and accurate metadata; misleading or inaccurate listing information is now drawing heavier scrutiny.
The AI-Generated App Crackdown: What's Actually Being Enforced
Apple's enforcement against AI-generated apps has been widely mischaracterized as a ban on "vibe coding" or AI-assisted development. The reality is more technical and more narrow.
Between March and April 2026, Apple blocked updates for Replit and Vibecode, removed the "Anything" app entirely, and triggered a lawsuit from Ex-Human (developer of Botify and Photify AI) claiming Apple is withholding over $500,000 in revenue. The common thread: apps that generate and execute code at runtime that was not present during Apple's review.
Apple's position is explicit: AI-assisted development is permitted. Apple itself integrates OpenAI and Anthropic into Xcode. What violates guidelines is dynamic code execution: downloading, generating, or running code after an app has passed review. This creates what Apple calls an "audit gap": functionality that exists in production but was never examined during the review process.
The Four Guidelines AI-Generated Apps Violate
Most AI-built app rejections trace to four specific violations:
- Guideline 2.5.2 (Dynamic Code Execution): Apps must be self-contained in their bundles. They cannot download, install, or execute code that introduces or changes functionality after review.
- Guideline 4.2 (Minimum Functionality): Apps must provide sufficient native functionality. Thin wrappers around web views displaying remotely generated content get rejected.
- Guideline 4.3 (Spam): Apps created from commercialized templates or generation services are rejected unless submitted by the content provider. Apple's automated systems detect duplicate code structures across submissions.
- Section 3.3.1(B) (Interpreted Code Limits): Downloaded interpreted code cannot change the primary purpose of the application or provide features inconsistent with the app's advertised purpose.
Security Vulnerabilities Drive the Enforcement
The policy enforcement is not arbitrary. Research shows that 45% of AI-generated code contains security flaws, with 2.74x more vulnerabilities than human-written code. Common issues include exposed API keys, missing input validation, authentication bypass, unencrypted data storage, and hardcoded credentials. An audit of apps built with one popular AI platform found 170+ apps with completely exposed databases and no Row Level Security, including one app that exposed 18,697 user records.
This is why human code review before submission is no longer optional. Automated generation without security audit creates liability for both developers and users.
Compliance Path for AI-Built Apps
Developers building with AI tools can still ship to the App Store by following these core principles:
- Compile to native binaries: Use tools that produce native iOS code (IPA files), not web wrappers displaying remotely generated content
- No runtime code generation: All functionality must be present in the submitted binary. Use server-driven configuration (feature flags, remote config) rather than code injection for updates
- Security audit all generated code: Run static analysis tools, audit for common vulnerabilities, test on real devices before submission
- Meaningful differentiation: Generic AI output triggers spam detection under Guideline 4.3. Invest in unique features, design polish, and substantive functionality
- Complete metadata and demo access: Provide demo credentials in App Review Notes, ensure all visual assets and descriptions are complete and professional
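The server-driven configuration point deserves a concrete illustration. The key distinction is that the server selects among behaviors already present in the reviewed binary; it never supplies new code. The following Kotlin sketch is hypothetical: `FeatureFlags`, `parseFlags`, and the flag names are illustrative, not a real SDK.

```kotlin
// Hypothetical feature-flag sketch: behavior changes via data, not downloaded code.
data class FeatureFlags(
    val newCheckoutEnabled: Boolean = false,
    val maxUploadMb: Int = 25,
)

// Parse a remote-config payload (already deserialized to a map) into typed flags.
fun parseFlags(payload: Map<String, Any?>): FeatureFlags = FeatureFlags(
    newCheckoutEnabled = payload["new_checkout_enabled"] as? Boolean ?: false,
    maxUploadMb = (payload["max_upload_mb"] as? Number)?.toInt() ?: 25,
)

// Both branches shipped in the reviewed binary; the server only picks one.
// Downloading and executing a new checkout implementation here is what
// Guideline 2.5.2 prohibits.
fun checkoutScreen(flags: FeatureFlags): String =
    if (flags.newCheckoutEnabled) "NewCheckout" else "LegacyCheckout"
```

This is the pattern Apple accepts: remote data toggles reviewed functionality, so nothing runs in production that a reviewer never saw.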
What This Means for Practitioners
Both platform updates demand proactive compliance work:
For Android developers: Audit your contact and location permission usage now. If your app uses these permissions for temporary actions, plan a migration to the new pickers before October. Use the Android Studio policy insights tool when it launches to catch violations before submission.
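One place to start that audit is a quick dump of the permissions your build actually declares, so you can spot READ_CONTACTS or precise-location grants that a picker or one-tap flow could replace. This sketch uses the standard `PackageManager` API; the helper name is hypothetical.

```kotlin
import android.content.Context
import android.content.pm.PackageManager

// Hypothetical audit helper: list every permission declared by this build.
fun declaredPermissions(context: Context): List<String> {
    val info = context.packageManager.getPackageInfo(
        context.packageName,
        PackageManager.GET_PERMISSIONS,
    )
    return info.requestedPermissions?.toList() ?: emptyList()
}

// Example: flag the permissions the new Play policies target.
fun flaggedPermissions(context: Context): List<String> =
    declaredPermissions(context).filter {
        it in setOf(
            android.Manifest.permission.READ_CONTACTS,
            android.Manifest.permission.ACCESS_FINE_LOCATION,
        )
    }
```

Any permission this surfaces that backs only a temporary action (a one-time share, a single store lookup) is a candidate for migration to the contact picker or location button.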
For iOS developers: If you use AI code generation, ensure your app compiles to a self-contained binary with no runtime code execution. Run security audits on all AI-generated code. If your app's core functionality requires dynamic code generation, consider shipping as a Progressive Web App instead.
For both platforms: Incomplete or misleading metadata now triggers heavier scrutiny. Ensure your app listing accurately represents functionality, includes demo credentials where required, and meets all technical requirements before submission. Review times have increased significantly, so plan submission timelines accordingly.
The message from both Apple and Google is consistent: automated detection and pre-submission tooling are replacing human judgment as the first line of enforcement. Apps that would have slipped through review six months ago will now be flagged before they reach a reviewer. Compliance is shifting left in the development cycle, and developers who adapt their workflows will avoid the rejection and removal cycles that are becoming increasingly costly.