
Google's Gemini makes contractors assess AI responses outside their expertise

The updated guidelines, according to the report, have raised concerns that Gemini could be more prone to errors on sensitive topics, potentially undermining users' trust in the chatbot's capabilities.

Social Samosa

Contractors working on Google’s Gemini AI system have expressed concerns over new internal guidelines that could impact the accuracy of the chatbot’s responses on sensitive topics such as healthcare.

According to a report, the changes, passed down by Google to contractors employed by GlobalLogic, a Hitachi-owned outsourcing firm, have raised questions about the quality of the evaluation process for AI-generated responses. Previously, contractors were allowed to skip prompts outside their domain expertise, such as niche questions about cardiology or advanced mathematics.

However, under the new policy, evaluators are no longer permitted to skip such prompts. Instead, they are instructed to assess the parts of the response they understand and include a note indicating their lack of expertise. The updated guidelines reportedly state, 'You should not skip prompts that require specialised domain knowledge.'

This represents a shift from the earlier directive, which allowed contractors to skip tasks requiring 'critical expertise' in areas like coding or advanced scientific topics. Contractors may now skip prompts only if the prompts are missing essential information or contain harmful content that requires special consent forms.

Concerns have been raised that the change could compromise the truthfulness and reliability of Gemini’s outputs, particularly on complex or technical subjects.

GlobalLogic employees play a crucial role in refining AI systems by evaluating responses on factors such as truthfulness. Experts warn that assigning tasks outside the evaluators’ expertise could lead to inaccuracies, especially in critical areas like healthcare, where misinformation can have serious consequences.

The controversy underscores the delicate balance between technological advancement and ethical considerations in the deployment of AI tools. Critics argue that expert evaluation is essential for ensuring the safety and effectiveness of AI, particularly as such systems are increasingly used in high-stakes scenarios.
