Google Bard's human trainers say they're frustrated by 'convoluted' tasks

Asking Google Bard a question on a Google Pixel 7 Pro phone (Image credit: Nicholas Sutrich / Android Central)

What you need to know

  • Contract workers training Google's Bard say working conditions are frustrating and describe the six-point task instructions as "convoluted."
  • Workers are often tasked with auditing an answer within three minutes.
  • Contractors are concerned about Bard's inaccuracies; however, the guidelines state they do not need to conduct a "rigorous fact check."

AI software still needs real people to keep it on the right track. However, the trainers behind Google's Bard are allegedly facing unbearable conditions.

Documents recently obtained by Bloomberg suggest human trainers for Google's AI chatbot Bard have been met with "convoluted instructions." Six of the company's current contractors have come forward about the less-than-stellar working conditions, with one saying, "As it stands right now, people are scared, stressed, underpaid, don't know what's going on."

On top of that pressure, workers say they are often given just three minutes to audit an answer. For context, these contractors are essentially raters: people who judge the relevance, authenticity, and coherence of the answers Bard offers to a query against a six-point guideline.

These trainers are also required to ensure answers don't contain anything offensive or harmful. At the same time, the guidelines state trainers "do not need to perform a rigorous fact check." That might seem reasonable, except that trainers have found Bard is prone to getting the "main facts" about a subject wrong.

It's a head-scratcher, then, that the same guidelines classify certain factual inaccuracies, such as getting a date wrong, as "minor."

Furthermore, raters have described tasks such as determining the appropriate medication dosage for someone treating high blood pressure. Given the tight time limits, workers worry that Bard is offering answers that appear correct when they're not.

In a statement, Google emphasized that raters are just one of several ways the company tests its responses for accuracy and quality:

"We undertake extensive work to build our AI products responsibly, including rigorous testing, training, and feedback processes we've honed for years to emphasize factuality and reduce biases."

The company adds that some of the concerned workers may have been rating qualities beyond accuracy, such as tone and presentation.

Google seemingly rushed headlong into developing its own AI chatbot after the success of OpenAI's ChatGPT and its integration into Microsoft's Bing search engine. Human trainers say they have been doing AI-prep work since January to ready Bard for its public release, and a former Google engineer has claimed Bard was trained on data from its competitor's chatbot, which may have contributed to its quick launch.

Nickolas Diaz
News Writer

Nickolas is always excited about tech and getting his hands on it. Writing for him can vary from delivering the latest tech story to scribbling in his journal. When Nickolas isn't hitting a story, he's often grinding away at a game or chilling with a book in his hand.