Google explains what went wrong with Gemini's AI image generation

Google Gemini screen overlay UI
(Image credit: Google)

What you need to know

  • Google has formally explained what went wrong with Gemini's AI image generation, which led the company to disable the tool on February 22.
  • The company says it tuned Gemini to produce more diverse results and avoid known pitfalls of AI image generation, but the tuning went too far and made the bot overly "cautious."
  • Google has pulled Gemini's image generation back for internal testing and warns that the AI is prone to hallucinations and may still produce "inaccurate" results.

Google has published a formal follow-up explaining what went wrong with its Gemini model's inaccurate AI image generation for certain prompts.

According to Google, when users prompt the bot for a set of images depicting a specific culture or historical period, they should "absolutely get a response that accurately reflects" their intent. However, that hasn't been the case, and Google says its "tuning" measures are where things first went wrong.

The post adds, "we tuned it to ensure it doesn't fall into some of the traps we've seen in the past with image generation technology," such as abuse or explicit imagery. Unfortunately, as Google explains, their tuning hadn't accounted for situations where a wide range of diversity isn't appropriate.

The second flaw involves Gemini becoming overly "cautious," refusing some prompts outright while treating others with excessive sensitivity. Google says the two problems together led the model to "overcompensate," which produced the inaccurate images.

The company states this "wasn't what we intended," and it denies deliberately creating inaccuracies involving historical content. To rectify the issues, Google is taking Gemini's AI image generation back into internal testing to iron things out.

The post adds that Google cannot promise Gemini won't hallucinate or create "embarrassing, inaccurate or offensive results" even after that work is done. However, the company says it will take action whenever problems arise.

In the meantime, Google recommends relying on Search for up-to-date information, as its separate systems surface "fresh, high-quality information" from across the web.

The Gemini Era graphic from Google.

(Image credit: Google)

Google officially disabled Gemini's ability to generate AI images from user prompts yesterday (Feb. 22). The move followed a wave of user reports and criticism over the bot's inaccurate depictions of historical figures and groups of people. While Google tried to steer Gemini toward producing diverse imagery of people, it quickly became clear that approach doesn't fit every prompt.

The company stated yesterday that it "missed the mark," and this follow-up outlines its next steps to correct things. Google has not yet said when Gemini's AI image generation will return.

Nickolas Diaz
News Writer

Nickolas is always excited about tech and getting his hands on it. Writing for him can vary from delivering the latest tech story to scribbling in his journal. When Nickolas isn't hitting a story, he's often grinding away at a game or chilling with a book in his hand.

  • parksanim
    Does anyone actually believe this, given the quotes posted around the internet highlighting the rabid wokeness of Director Jack K. at Google? Assuming this is true?
  • Jerry Hildenbrand
    Anybody who understands how ML modules are programmed and trained does. But you do you.
  • parksanim
    Jerry Hildenbrand said:
    Anybody who understands how ML modules are programmed and trained does. But you do you.
    Exactly. Think about what you just said. Thank you for making my point.