Decoding how our AI fears are coming true

We decode how the rapid integration of AI into workplaces and daily life is displacing human jobs, amplifying bias, compromising privacy, and eroding critical thinking.

Shamita Islur

Remember those old futuristic predictions about 2025? Flying cars, robot teachers, and gadgets that would make life easier? Well, here we are in 2025, and about half of that has come true. We still don't have flying cars, but AI teachers and ever-smarter gadgets have brought about a disturbing reality, quite different from the one I imagined.

Technology was built with the idea of making our lives more convenient. However, we have paid a hefty price for that advancement: human jobs.

The recent debates over the ethics of 70-hour and 90-hour work weeks have highlighted their toll on mental health, with critics arguing that employees deserve better pay and reasonable hours.

Yet instead of addressing these legitimate concerns, corporate executives have found a faster solution: replace humans with AI systems that never complain, never sleep and never ask for raises.

Take Zomato, for instance. The food delivery platform recently fired nearly 600 customer support executives, many of whom were hired just last year under its Zomato Associate Accelerator Program (ZAAP). Former employees have said the layoffs came without warning and that they were treated unfairly; many still do not understand why they were fired in the first place.

What makes this particularly troubling is the timing. The layoffs came within a month of Zomato launching Nugget, an AI-powered customer support platform. According to the company's website, the platform's AI agents resolve up to 80% of customer queries, improve compliance by 20%, and cut resolution time by 20%. These employees weren't let go because they were underperforming; they were let go because AI doesn't demand fair treatment.

Slashing jobs to “streamline efficiency”

Zomato is not the only company following this playbook. Across the tech industry, AI-driven layoffs have become the norm.

  • Meta cut approximately 3,600 employees (5% of its global workforce) in February 2025, targeting so-called "low performers." Yet many affected workers had received positive performance reviews just months earlier. CEO Mark Zuckerberg said Meta wanted to "raise the bar" on talent, and the company moved to accelerate hiring for AI and machine learning roles immediately after the cuts. The message couldn't be clearer. According to reports, layoffs began on Monday; AI hiring started Tuesday.

  • Workday laid off 1,750 employees (8.5% of its workforce) while simultaneously increasing investments in AI.

  • Salesforce cut 1,000 jobs, pivoting toward AI-driven solutions.

  • Dell laid off 12,500 employees last year as it shifted toward AI-powered infrastructure.

  • Intel eliminated over 15,000 positions (15% of its workforce) as it pivoted toward AI-driven computing.

  • Electronic Arts (EA) laid off 775 employees (6% of its workforce) to prioritize AI and machine learning in game development.

  • Amazon plans to cut around 14,000 managerial positions by early 2025, a roughly 13% reduction in its global management workforce, a move expected to save the company between Rs 210 crore and Rs 360 crore annually.

These aren't isolated incidents. They represent a shift in corporate priorities. According to the World Economic Forum's 2025 Future of Jobs Report, 41% of employers plan to downsize their workforce due to artificial intelligence. The economic implications are staggering. 

Atomberg founder Arindam Paul ominously predicted in a report that 40-50% of the white-collar jobs that exist today might cease to exist, potentially spelling the end of the middle class.

While corporations celebrate cost-cutting measures and increased efficiency, what happens to our consumer economy when millions lose their income and purchasing power?

"We still need human creativity"

Industry leaders love to reassure us that certain human qualities will always remain essential. Till Leopold, a lead author of the World Economic Forum study, emphasised in a report that "human skills" like creativity, collaboration, and resilience will "become newly important."

But even creativity is now in trouble.

The recent Studio Ghibli AI trend proves my point. OpenAI recently built its most advanced image generator into GPT‑4o, the model behind ChatGPT. It lets users turn their photos into images mimicking distinctive artistic styles, and people flocked to render themselves in the style of Studio Ghibli's acclaimed animations. The feature went viral, with thousands eagerly uploading their photos to see themselves "Ghiblified."

What most don't realise is that they are participating in a privacy nightmare. The Ghibli Effect isn't just an AI copyright controversy; it is also a way for AI companies like OpenAI to gather thousands of personal images.

ChatGPT itself says that OpenAI trains its models using a multi-step process. This includes:

  • Data Collection: Large datasets are gathered from publicly available sources on the internet, like websites, books, articles, and code, for a broad understanding of language and knowledge.

  • Pretraining: The model is initially trained on this dataset using a technique called unsupervised learning, where it predicts the next word in a sentence, helping it learn grammar, facts, reasoning patterns and coding skills (see the toy sketch after this list).

  • Reinforcement Learning from Human Feedback (RLHF): Human reviewers fine-tune the model by ranking or editing its responses. The model is then trained with reinforcement learning to improve the quality, usefulness, and safety of its answers.

  • Safety and Alignment: Additional techniques help align the model’s responses with human values and prevent harmful outputs, including red-teaming (stress-testing the model), rule-based filtering and ongoing human oversight.
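To make the "predict the next word" step concrete, here is a minimal, hypothetical sketch. It is not OpenAI's code: real systems are transformers trained on trillions of tokens, while this toy uses PyTorch and a tiny recurrent network to learn next-character prediction on a single sentence. The point is only to illustrate the unsupervised objective: guess what comes next, measure the error, adjust, repeat.

```python
# Toy illustration of next-token-prediction pretraining (NOT OpenAI's pipeline).
# A tiny character-level model learns to predict the next character in a string.
import torch
import torch.nn as nn

text = "technology was made to make our lives more convenient"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}       # character -> integer id
ids = torch.tensor([stoi[c] for c in text])        # encode the tiny "corpus"

# Inputs are all characters except the last; targets are shifted by one,
# so the model is always asked: "given what came before, what comes next?"
x, y = ids[:-1].unsqueeze(0), ids[1:]

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, idx):
        h, _ = self.rnn(self.embed(idx))
        return self.head(h)                        # logits over the next character

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(300):                            # the "pretraining" loop
    logits = model(x).squeeze(0)                   # (sequence_length, vocab_size)
    loss = nn.functional.cross_entropy(logits, y)  # penalise wrong next-char guesses
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final next-character prediction loss: {loss.item():.3f}")
```

Everything else in the list, from RLHF to safety tuning, is layered on top of a model pretrained this way, which is exactly why fresh user-supplied data is so valuable to these companies.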

By uploading their faces and personal photos to ChatGPT, users have voluntarily provided OpenAI with fresh training data, including family photos, intimate pictures, and images that likely weren't previously available on social media. While Studio Ghibli has since asked OpenAI to stop using its artistic style, the damage is already done, both to Ghibli's artistic legacy and to users' privacy.

Here's the particularly concerning part. When OpenAI scrapes personal images from the internet, it must comply with privacy regulations like the EU's GDPR, which imposes limitations based on legitimate interest balancing tests. But when users voluntarily upload these images, they consent to the company processing them, giving it more freedom to use this data to train its models.

Accuracy problems and bias concerns

I have highlighted this issue often in my AI-centric pieces. As a tech reporter, I have used AI models and noticed their inaccuracy on several occasions: I would give a model specific instructions, and it would forget the task I had asked it to perform. Despite the rush to replace humans with AI systems, these technologies remain flawed.

For example, models like OpenAI's GPT-4o and Anthropic's Claude have inaccurately asserted that strawberry has two r's instead of three.

Beyond accuracy, AI has repeatedly been shown to be biased along lines of gender, race, culture and language. Amazon was forced to abandon an AI hiring tool in 2018 after discovering it systematically discriminated against women. The system had been trained on resumes submitted over 10 years, most of which came from men, which led the algorithm to downgrade resumes containing terms like "women's" or listing women's colleges. Despite years of supposed improvements, similar bias issues persist in hiring algorithms across industries.

Similarly, a study found that popular language models consistently assigned professions like surgeon, CEO or engineer to male characters, while generating female characters when asked about nurses, elementary school teachers or housekeepers.

Moreover, Google's search algorithms have been criticised for reinforcing stereotypes: studies show that image searches for "professional hairstyles" versus "unprofessional hairstyles" return results with distinct racial biases. Meanwhile, the COMPAS algorithm, used in US courts to predict recidivism risk, has falsely flagged Black defendants as future criminals at nearly twice the rate of white defendants.

Yet, in a recent proposal, Elon Musk suggested deploying AI systems to evaluate the performance of 2.3 million U.S. federal employees and determine their continued employment.

The biggest threat: Our diminishing ability to think

Perhaps the most alarming consequence is how AI is eroding our capacity for independent thought. Increasingly, people turn to AI for everything from writing emails and solving problems to creating art. Each time we outsource thinking, we exercise our mental muscles a little less, gradually chipping away at our critical reasoning abilities.

The situation reminds me of Studio Ghibli's ‘Spirited Away.’ In the film, the character No Face consumes others, absorbing their traits and growing more powerful. Like No Face, AI systems consume human knowledge, creativity and labour, growing more capable while we become increasingly dependent and diminished.

No Face offered gold to tempt people, and AI offers convenience and efficiency. But as the film shows, that gold ultimately proves worthless, much like AI's efficiencies, which may come at the cost of long-term human capability.

The question isn't whether AI should exist; I believe we are long past that. The real question is how we integrate these technologies without sacrificing human livelihood, privacy and autonomy.

We need regulations that protect workers from AI-justified layoffs. We need privacy laws that prevent companies from collecting our personal data through viral trends. And we need educational approaches that strengthen critical thinking.

Otherwise, we risk living in a world where huge corporations profit from scaled efficiencies while the rest of us compete with algorithms for scarce opportunities.
