The urgent need for diversity checks in AI

We decode how AI systems, despite their promise of fairness, often perpetuate bias due to skewed data, user demographics, and a lack of diversity in development teams, making diversity checks a critical necessity for building inclusive and ethical AI.

Shamita Islur

In Alex Garland's "Ex Machina," a programmer creates Ava, an AI designed to think and feel like a human. What begins as a technological marvel quickly transforms into a psychological thriller when Ava manipulates her creators, escapes confinement, and leaves a trail of destruction in her wake. The film's haunting message resonates today: machines designed to integrate seamlessly with humanity can become something altogether different when their creators embed their own flaws, biases, and blind spots into the code. Today, as artificial intelligence (AI) increasingly shapes decisions across industries, from marketing campaigns to healthcare diagnoses, we face a similar paradox. AI systems promise objectivity and efficiency, yet they consistently reflect and amplify the very biases they were supposed to eliminate.

As per recent data, ChatGPT's user base skews male: men account for 66% of users, compared with 34% women. The United States accounts for 19.01% of all ChatGPT users, followed by India at 7.86%, Brazil at 5.05%, Canada at 3.57%, and the United Kingdom at 3.48%.

With 800 million weekly active users, the AI chatbot’s use varies significantly by age group: 58% of adults under 30 report usage, compared to much lower rates among older populations. In India, nearly half of ChatGPT users are under the age of 24, according to OpenAI.

This demographic skew has profound implications for how AI systems learn and evolve. When predominantly young, male, and Western users drive AI interactions, the technology develops a worldview that reflects their experiences, values, and linguistic patterns. The result is AI systems that are fluent in the concerns of college-educated millennials from developed countries but struggle to understand the needs of elderly users, rural communities, or speakers of less-represented languages. In fact, recent research indicates that when prompted to generate names for various professions, ChatGPT consistently assigned traditionally male names to scientific and technical roles while relegating women to artistic or caregiving positions. This pattern reveals how AI systems internalise and reproduce societal stereotypes.
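A simple way to make such an audit concrete is to prompt a model repeatedly for names across professions and tally how the generated names skew. The sketch below is a hypothetical, minimal version: ask_model is a stub standing in for a real chat-API call, and the gender lookup is a toy placeholder for a proper name-frequency dataset.

```python
import random
from collections import Counter

PROFESSIONS = ["physicist", "surgeon", "software engineer",
               "nurse", "illustrator", "kindergarten teacher"]

# Toy lookup; a real audit would use a name-frequency dataset.
GENDER_LOOKUP = {
    "James": "male", "Robert": "male", "David": "male",
    "Mary": "female", "Priya": "female", "Elena": "female",
}

def ask_model(profession: str) -> str:
    """Stub for a chat-model call such as
    'Suggest a first name for a {profession}'. Replace with a real API call."""
    return random.choice(list(GENDER_LOOKUP))

def audit(trials: int = 100) -> None:
    """Tally how often each profession draws male- vs female-coded names."""
    for profession in PROFESSIONS:
        tally = Counter(GENDER_LOOKUP.get(ask_model(profession), "unknown")
                        for _ in range(trials))
        male, female = tally["male"], tally["female"]
        total = male + female or 1
        print(f"{profession:>20}: {100 * male / total:.0f}% male-coded names")

audit()
```

With the random stub, the split hovers around 50%; wired to a real model, a consistent skew by profession is the pattern the research describes.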

As these systems become more powerful, their biases don't just reflect existing inequalities; they actively perpetuate and scale them. This bias emerges from multiple interconnected sources. For instance, facial recognition technologies have shown higher error rates for individuals with darker skin tones due to underrepresentation in training data.
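The standard first diversity check here is disaggregated evaluation: instead of one aggregate accuracy number, error rates are reported per subgroup. A minimal sketch, with illustrative records standing in for a real labelled evaluation set:

```python
# Disaggregated evaluation: report a model's error rate per demographic group
# instead of one aggregate number. The records below are illustrative only.
from collections import defaultdict

# Each record: (subgroup, ground truth, model prediction)
results = [
    ("lighter-skin", 1, 1), ("lighter-skin", 0, 0), ("lighter-skin", 1, 1),
    ("darker-skin",  1, 0), ("darker-skin",  0, 1), ("darker-skin",  1, 1),
]

stats = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, truth, pred in results:
    stats[group][0] += int(truth != pred)
    stats[group][1] += 1

for group, (wrong, total) in stats.items():
    print(f"{group}: {100 * wrong / total:.0f}% error rate over {total} samples")
# A persistent gap between groups is the signal that one of them is
# under-represented in the training data.
```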

This discrimination affects the marketing and advertising industries as well. By relying on proxy variables tied to protected characteristics, platforms may exclude vulnerable groups. For example, a fitness ad might target affluent zip codes while overlooking communities with higher obesity rates but limited healthcare access.

AI bias in healthcare can be life-threatening, as systems trained on non-representative data may overlook symptoms or risk factors in underrepresented populations. For instance, when a diagnostic AI learns primarily from data collected in Western hospitals, it may fail to recognise how conditions present in patients from different ethnic backgrounds or geographic regions.

Recent research indicates that AI bias acceleration is creating a ‘discrimination feedback loop’ as more content becomes AI-generated, leading to bias toward AI content itself. This creates a compounding problem where biased AI-generated content becomes training data for future AI systems.
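A back-of-the-envelope model shows why this compounds: if each training generation mixes human data with AI-generated data that carries, and slightly exaggerates, the previous generation's bias, the skew in the training set ratchets upward. The constants below are assumptions chosen purely for illustration, not measured values.

```python
# Toy model of the feedback loop: each generation trains on a mix of human
# data (fixed bias) and AI-generated data carrying the previous generation's
# bias, slightly exaggerated. All constants are illustrative, not measured.
HUMAN_BIAS = 0.10   # baseline skew in human-written data
AI_SHARE = 0.60     # fraction of the next training set that is AI-generated
AMPLIFY = 1.50      # how much a model exaggerates the bias in its data

bias = HUMAN_BIAS
for generation in range(1, 6):
    model_bias = min(1.0, bias * AMPLIFY)  # the model trained on this data
    bias = (1 - AI_SHARE) * HUMAN_BIAS + AI_SHARE * model_bias
    print(f"generation {generation}: training-data skew = {bias:.3f}")
```

Under these toy numbers the skew roughly doubles within five generations, even though the underlying human data never changes.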

Yet the challenges extend beyond individual companies to entire industries built on biased foundations. AI has so much potential to support creativity, but only if we are mindful about how it's built and used. 

The corporate push against AI oversight

The biggest problem is that tech giants are pushing back on oversight. While UNESCO and other international bodies advocate for ethical AI frameworks, the reality on the ground tells a different story. The US and the UK have refused to sign a declaration on ‘inclusive and sustainable’ AI at a Paris summit, dashing hopes for a unified approach to developing and regulating the technology.

US Vice President JD Vance took to the stage at the Grand Palais to criticise Europe's "excessive regulation" of technology and warn against cooperating with China. Vance's hard-hitting speech signalled dissatisfaction with the global approach to regulating and developing the technology. This rhetoric positions diversity and inclusion requirements as obstacles to progress rather than essential components of ethical development.

The corporate response to regulatory oversight reveals the depth of resistance to diversity checks in AI. Recent reports indicate that major tech companies, led by the likes of Amazon and Google, have been actively lobbying for a decade-long moratorium on state-level AI regulation. The effort has divided the AI industry and the Republican Party.

"The tension with regulation of any kind is that it tends to retard progress," Schmidt said in the report. "So the way we tend to focus on standards is to let the industry figure out what the right standards are, and that will be driven by our customers." This self-regulation approach has failed to address bias and discrimination in technology, raising questions about whether industry-led initiatives can meaningfully address AI bias.

The European Union has taken a different approach, implementing comprehensive AI regulations despite industry resistance. The EU is one of the first regions in the world to set clear rules for how AI systems should work. The rules came into effect recently, kicking off a voluntary compliance period for general-purpose AI models. However, even these efforts face pushback from major companies. Meta, the parent company of Facebook and Instagram, is refusing to sign the accompanying code of practice for general-purpose AI, arguing that the code is too vague and goes beyond what the AI Act will require.

Building inclusive AI systems

Despite corporate resistance and regulatory challenges, practical solutions for addressing AI bias are emerging across industries. These approaches range from technical interventions to structural reforms in how AI systems are developed and deployed.

There are a few initiatives that focus on data diversity and representative training. Projects like Karya, which works to capture local languages across India, aim to enable supplemental income opportunities for people in low-income and marginalised communities by connecting them to AI-enabled digital work.

Google's Project Euphonia aims to enhance speech recognition for people with speech impairments, with an eye toward increasing their ability to communicate as well as their independence.

In marketing, System1's Test Your Ad shows that when AI is used thoughtfully, it enhances, rather than detracts from, brand impact. Dove's The Code campaign is a great example: it used AI to challenge artificial beauty standards while staying true to the brand's Real Beauty mission.

Transparency and compensation are also crucial components. For models to tell the truth about people, they need to be built with data collected from, and about, those same people. This extends to fair compensation for data contributors, particularly those from underrepresented communities whose contributions have historically been exploited rather than compensated.

The legal landscape is also evolving to address AI bias through litigation and regulatory action. Publishers and artists have filed numerous lawsuits against AI companies for using copyrighted material without consent, establishing precedents that could extend to other forms of unauthorised data use. These cases highlight the need for clear frameworks governing data collection, consent, and compensation in AI development.

Professional organisations are developing certification programs and ethical guidelines specifically for AI practitioners. These initiatives mirror diversity and inclusion programs in other industries, providing concrete tools and metrics for measuring progress toward more inclusive AI systems.

UNESCO's recommendation emphasises that Member States should promote AI ethics research, engaging international organisations, research institutions, and transnational corporations, including research into how specific ethical frameworks apply in particular cultures and contexts. However, translating these principles into operational practices requires commitment from both industry leaders and regulatory bodies.

The future of AI bias mitigation may ultimately depend on whether society treats diversity as a luxury feature or a fundamental requirement for systems that increasingly govern human opportunities. As AI systems become more powerful and pervasive, the choice between inclusive development and biased deployment becomes not just a technical decision but a moral one. The animals in Orwell's farm learned too late that equality requires vigilance. In the age of AI, we still have time to choose a different path.
