Generative AI development involves numerous employees, including "prompt engineers" and analysts, who assess chatbot outputs to enhance AI accuracy. However, new internal guidelines from Google regarding its Gemini project have raised concerns about the potential for disseminating inaccurate information, particularly in sensitive areas like healthcare.
Contractors from GlobalLogic, an outsourcing firm, are now required to evaluate AI-generated responses even when the prompts fall outside their expertise. Previously, contractors could bypass prompts if they lacked the relevant knowledge, such as in specialized fields like cardiology. The updated guidelines require them to rate every response, noting their lack of expertise where applicable.
This change has sparked worries about Gemini's reliability, especially when contractors must evaluate complex AI responses on topics such as rare diseases without adequate background. Contractors may now skip a prompt only in two cases: when the prompt or response is missing critical information, or when the content is harmful and requires special consent to evaluate. Google has not commented on these developments.