(AP) - Google apologized Friday for its faulty rollout of a new artificial intelligence image-generator, acknowledging that in some cases the tool would “overcompensate” in seeking a diverse range of people even when such a range didn't make sense.
The partial explanation for why its images put people of color in historical settings where they wouldn't normally be found came after Google said it was temporarily stopping its Gemini chatbot from generating any images with people in them. That was in response to a social media outcry from some users claiming the tool had an anti-white bias in the way it generated a racially diverse set of images in response to written prompts.
“It's clear that this feature missed the mark,” said a blog post Friday from Prabhakar Raghavan, a senior vice president who runs Google's search engine and other businesses. “Some of the images generated are inaccurate or even offensive. We're grateful for users' feedback and are sorry the feature didn't work well.”
Raghavan didn't mention specific examples, but among those that drew attention on social media this week were images that depicted a Black woman as a U.S. founding father and showed Black and Asian people as Nazi-era German soldiers. The Associated Press was not able to independently verify what prompts were used to generate those images.
Google added the new image-generating feature to its Gemini chatbot, formerly known as Bard, about three weeks ago. It was built atop an earlier Google research experiment called Imagen 2.
Google has known for a while that such tools can be unwieldy. In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools can be used for harassment or spreading misinformation “and raise many concerns regarding social and cultural exclusion and bias.” Those considerations informed Google's decision not to release “a public demo” of Imagen or its underlying code, the researchers added at the time.
Since then, the pressure to publicly release generative AI products has grown because of a competitive race between tech companies trying to capitalize on interest in the emerging technology sparked by the advent of OpenAI's chatbot ChatGPT.
The problems with Gemini are not the first to recently affect an image-generator. Microsoft had to adjust its own Designer tool several weeks ago after some people were using it to create pornographic images of Taylor Swift and other celebrities. Studies have also shown that AI image-generators can amplify the racial and gender stereotypes found in their training data, and without filters they are more likely to show lighter-skinned men when asked to generate a person in various contexts.
“When we built this feature in Gemini, we tuned it to ensure it doesn't fall into some of the traps we've seen in the past with image generation technology, such as creating violent or sexually explicit images, or depictions of real people,” Raghavan said Friday. “And because our users come from all over the world, we want it to work well for everyone.”
He said many people might “want to receive a range of people” when asking for a picture of football players or someone walking a dog. But users looking for someone of a specific race or ethnicity, or in particular cultural contexts, “should absolutely get a response that accurately reflects what you ask for.”
While Gemini overcompensated in response to some prompts, in others it was “more cautious than we intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive.”
He didn't say what prompts he meant, but Gemini routinely rejects requests for images of certain subjects, according to tests of the tool by the AP, in which it declined to generate images about the Arab Spring, the George Floyd protests or Tiananmen Square. In one instance, the chatbot said it didn't want to contribute to the spread of misinformation or “trivialization of sensitive topics.”
Much of this week's outrage over Gemini's outputs originated on X, formerly known as Twitter, and was amplified by the social media platform's owner, Elon Musk, who decried what he described as Google's “insane racist, anti-civilizational programming.” Musk, who has his own AI startup, has frequently criticized rival AI developers, as well as Hollywood, for alleged liberal bias.
Raghavan said Google will do “extensive testing” before turning on the chatbot's ability to show people again.
University of Washington researcher Sourojit Ghosh, who has studied bias in AI image-generators, said Friday he was disappointed that Raghavan's message ended with a disclaimer that the Google executive “can't promise that Gemini won't occasionally generate embarrassing, inaccurate or offensive results.”
For a company that has perfected search algorithms and has “one of the biggest troves of data in the world, generating accurate results or unoffensive results should be a fairly low bar we can hold them accountable to,” Ghosh said.
Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.