Image credit: Daniel Garcia / Pinterest
In the 2002 film Minority Report, Tom Cruise's character walks through a futuristic mall where retinal scanners identify him instantly, triggering personalised ads that call him by name and reference his purchase history. The scene was meant to unsettle audiences.
Two decades later, the Indian web series Asur updates this nightmare for the smartphone era: a serial killer uses AI and leaked social media data to predict his victims' movements. One character describes social networking apps as modern-day alcohol that creates a false sense of freedom while ‘internally corrupting your soul’ through data extraction.
The gap between fiction and reality might soon collapse. Two decades apart, the fear is the same; only the tools have changed. Your phone's ID, your location history, your period-tracking app, and even the metadata from your encrypted messages do the job more efficiently than any retinal scanner.
AI adoption
AI has fundamentally changed what data privacy means. Data is no longer stored in passive archives. It is the raw material that trains the algorithms predicting your next purchase, your creditworthiness, your health risks, and increasingly, your behaviour.
According to recent research, 90% of global organisations have expanded their privacy programs specifically because of AI adoption. By 2026, 93% plan further investment increases. Among large enterprises, 38% now spend more than $5 million annually on privacy, up from just 14% two years ago.
As one report notes, privacy is no longer a compliance checkbox; it is a competitive advantage. In a market where AI is embedded in healthcare diagnostics, financial risk models, and even search engines, the ability to demonstrate responsible data handling separates market leaders from those left behind.
India's privacy paradox
India offers one of the starkest examples of the gap between privacy law and privacy practice. In 2023, the country passed the Digital Personal Data Protection (DPDP) Act, a comprehensive framework designed to protect the digital identities of more than a billion citizens. The law centres on the ‘Data Principal’, the individual, and mandates that consent must be freely given, specific, informed, and unambiguous. It bans fine-print agreements and assumed consent.
In practice, its implementation has been uneven.
An EY survey of over 150 professionals across Indian industries found that roughly 70% were unfamiliar with the specifics of the DPDP Act and its 2025 rules. Readiness varies by sector: 50% of consumer and retail companies have begun compliance efforts, compared to just 9.9% of healthcare and life sciences organisations. Overall, 81% of Indian organisations have not updated their privacy policies to align with the new law, and 83% have not started comprehensive implementation.
The healthcare sector's 9.9% readiness rate is particularly troubling given the sensitivity of medical data. The readiness deficit suggests that while India has world-class privacy legislation, the underlying infrastructure remains fragmented and vulnerable.
Then came Sanchaar Saathi.
Weeks after the DPDP Rules were notified in November 2025, the Indian government issued a confidential directive requiring smartphone manufacturers to preload their devices with Sanchaar Saathi, a state-owned cybersecurity app. The official purpose was to combat fraud, track stolen devices, and prevent spoofed IMEI numbers. But the app's technical requirements told a different story.
The app requires system-level integration, meaning users cannot easily disable or remove it. Its privacy policy requests access to call logs, SMS logs, telephony functions, stored media, and the device camera. Taken together, these permissions could give the state sweeping access to millions of Indian smartphones.
The contradiction was stark: a country that legislates for informed consent and data minimisation is simultaneously mandating the installation of a non-removable state surveillance interface.
Apple refused to comply, stating that preloading the app would violate its core privacy design principles and introduce critical security vulnerabilities into iOS. Meeting the mandate would require Apple to build a backdoor for state access, something the company has refused to do globally.
Earlier data privacy efforts
End-to-end encryption has become the standard for messaging: WhatsApp, Signal, and iMessage all use it to protect message content from interception. But encryption protects only what you say, not the ‘digital DNA’ of the communication: who you're talking to, when, for how long, from where, and on what device.
This is metadata, and it is often more valuable than the messages themselves.
BlackBerry understood this. Unlike consumer apps, BlackBerry's enterprise systems, such as SecuSUITE, encrypted both message content and metadata. The company's architecture was designed for ‘sovereign control,’ meaning data, infrastructure, and user identity remained governed solely by the institution using it.
WhatsApp illustrates what critics call ‘metadata mining.’ While Meta cannot read users' messages, it collects IP addresses, location data, phone models, usage frequency, and contact lists. This data builds a ‘social graph’ that is monetised through advertising.
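To see why metadata matters, consider a minimal, hypothetical sketch in Python. The records below contain no message content at all, only who contacted whom, when, and for how long, yet a rough social graph falls out of them almost for free. Every name and entry here is invented.

```python
# A hypothetical sketch: every record below is metadata only, with no
# message content, yet a social graph emerges from it. All entries invented.
from collections import defaultdict

# Each record: (sender, recipient, timestamp, call/chat duration in seconds)
metadata_log = [
    ("user_a", "user_b", "2026-01-12T23:41", 540),
    ("user_a", "user_b", "2026-01-13T23:05", 610),
    ("user_a", "clinic", "2026-01-14T09:15", 180),
    ("user_b", "user_c", "2026-01-14T19:30",  60),
]

social_graph = defaultdict(list)
for sender, recipient, when, seconds in metadata_log:
    social_graph[sender].append((recipient, when, seconds))

# Without reading a single word, the pattern already suggests a close
# late-night relationship and a health-related contact.
for person, edges in social_graph.items():
    for recipient, when, seconds in edges:
        print(f"{person} -> {recipient} at {when} ({seconds}s)")
```

Scaled to billions of messages and joined with location and device data, this is the raw material of the social graph advertisers pay for.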
Another vulnerability is the ‘backup trap.’ By default, WhatsApp backs up chat histories to Google Drive or iCloud. Historically, these backups were stored without end-to-end encryption, leaving them readable by the cloud provider and, by extension, accessible to law enforcement. WhatsApp has since introduced end-to-end encrypted backups, but the feature is not enabled by default, and few users know how to activate it.
Apple's iMessage backups are end-to-end encrypted only when iCloud's Advanced Data Protection is turned on, and again, this is not the default.
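The difference comes down to who holds the key. The sketch below is not WhatsApp's or Apple's actual backup scheme; it is a minimal illustration, using the third-party Python ‘cryptography’ package, of why a backup encrypted with a provider-held key stays readable server-side, while one encrypted with a key derived from a passphrase only the user knows does not.

```python
# A minimal illustration of key custody, not any vendor's actual scheme.
# Assumes the third-party 'cryptography' package (pip install cryptography).
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

chat_history = b"exported chat history"

# Default-style backup: the provider generates and holds the key, so the
# provider (or anyone who can compel it) can decrypt the stored backup.
provider_key = Fernet.generate_key()
cloud_backup = Fernet(provider_key).encrypt(chat_history)
assert Fernet(provider_key).decrypt(cloud_backup) == chat_history  # readable server-side

# End-to-end encrypted backup: the key is derived from a passphrase only the
# user knows; the provider stores ciphertext it has no way to open.
salt = os.urandom(16)
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
user_key = base64.urlsafe_b64encode(kdf.derive(b"passphrase only the user knows"))
e2e_backup = Fernet(user_key).encrypt(chat_history)
# Without the passphrase, e2e_backup is just opaque bytes to the provider.
```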
The pattern is clear: privacy features exist, but they are buried in settings most users might never see.
Health data as commodity
Health data represents the most intimate category of personal information. Yet it has also become one of the most heavily monetised. The rise of ‘FemTech’, apps that track menstrual cycles, ovulation, and pregnancy, has led to the collection of detailed biological data from millions of women. The exploitation of this data has resulted in some of the most significant privacy lawsuits of the decade.
Flo Health, a period-tracking app that promised to keep user data private, was alleged to have shared intimate details about pregnancy goals and sexual activity with Meta and Google without user consent.
Health data is also exposed when the infrastructure handling it is breached: in 2024, Dropbox detected unauthorised access to the production environment of Dropbox Sign, its e-signature service, through which sensitive documents routinely pass.
OpenAI's entry into healthcare with ‘ChatGPT Health’ earlier this month is another concerning development. The service allows users to upload medical records and integrate wellness data for personalised guidance. The company says this health data is siloed, encrypted, and not used to train its models, a claim users have little means to verify.
In 2026, data privacy is no longer about preventing identity theft or avoiding spam. It is about resisting a world in which every digital footprint, biological rhythm, and private association is harvested to train opaque models that predict and manipulate human choice.
The contradictions are everywhere. India passes a world-class privacy law, then mandates the installation of a state surveillance app.
Messaging apps encrypt your words but monetise your metadata.
Health apps promise privacy while selling your fertility data.
Cloud providers secure your files until a single service account is compromised.
The infrastructure of surveillance is now embedded in the infrastructure of daily life. The advertisements in the mall know your name. The app on your phone knows when you're ovulating. The AI knows what you'll buy before you do.
That is why 2026 is the year data privacy became non-negotiable. Not because the technology is new, but because the consequences of ignoring it are becoming impossible to deny.