Can Google Fix Its "Woke" AI Problem?

Google's generative artificial intelligence tool, Gemini, has found itself at the epicenter of a digital storm after its outputs were branded "woke."

The tech giant recently rebranded Bard as Gemini to compete with OpenAI's ChatGPT as the AI race intensifies, but it will certainly take time for Google to address the tool's algorithmic bias.

Gemini, Google's answer to the viral chatbot ChatGPT, faced a backlash for generating images and text responses deemed politically correct to the point of absurdity. As Google grapples with the fallout, the underlying challenge of addressing bias in AI training data has resurfaced, revealing the complexities inherent in the quest for algorithmic fairness.

Similar to ChatGPT, Gemini can respond to text prompts and generate images from textual input. However, its recent missteps have sparked controversy. Its image generator inaccurately depicted the US Founding Fathers as including a black man, and similar historical inaccuracies appeared in images of German soldiers from World War Two. Google swiftly issued an apology and paused Gemini's generation of images of people, acknowledging that the tool was "missing the mark."

Gemini's woes extended beyond images to its text responses, where it offered politically correct yet seemingly absurd answers. Notably, it responded that there was "no right or wrong answer" to a question comparing Elon Musk posting memes to Hitler's actions. The AI also asserted that it would "never" be acceptable to misgender a high-profile trans woman such as Caitlyn Jenner, even to prevent a nuclear apocalypse, eliciting a response from Jenner herself. Elon Musk expressed concern about these responses, given Gemini's integration into Google's widely used products.

Google's CEO Sundar Pichai, in an internal memo, conceded that some of Gemini's responses had "offended our users and shown bias," deeming this "completely unacceptable." He assured employees that Google's teams were working diligently to rectify the issues, recognizing the urgency of addressing biased outcomes from AI tools.

The fundamental challenge lies in the vast amounts of data used to train AI tools like Gemini. Publicly available data, often sourced from the internet, inherently carries biases. Past instances have revealed AI tools making erroneous assumptions based on biased data, such as associating high-powered jobs exclusively with men. Google attempted to counteract this bias by instructing Gemini not to make certain assumptions, inadvertently leading to over-politicized and impractical outputs.

Efforts to mitigate bias have unintentionally resulted in AI outputs that strive so hard to be politically correct that they become detached from reality. Google's attempt to offset human biases with explicit instructions has backfired due to the nuanced nature of human history and culture, which machines struggle to comprehend without specific programming.
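To make the failure mode concrete, here is a minimal, hypothetical sketch in Python of a blanket counter-bias instruction prepended to every request. The instruction text and helper function are assumptions for illustration only, not Google's actual system prompt or code.

```python
# Hypothetical illustration of a blanket counter-bias instruction.
# This is NOT Google's actual Gemini system prompt or code.

SYSTEM_INSTRUCTION = (
    "When generating images of people, always depict a diverse range "
    "of genders and ethnicities."
)

def build_prompt(user_request: str) -> str:
    """Prepend the blanket instruction to every request, regardless of context."""
    return f"{SYSTEM_INSTRUCTION}\n\nUser request: {user_request}"

# For a generic request, the instruction plausibly counteracts biased
# training data:
print(build_prompt("a photo of a software engineer"))

# For a historically specific request, the same instruction forces
# diversity where it is factually wrong -- the failure mode seen in Gemini:
print(build_prompt("a portrait of the US Founding Fathers"))
```

Because the instruction is applied unconditionally, the model has no way to distinguish requests where added diversity corrects a bias from requests where it contradicts the historical record.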

Fixing Gemini's issues is proving to be a formidable task. While DeepMind co-founder Demis Hassabis suggests the problems could be resolved in a matter of weeks, other AI experts are skeptical. Dr. Sasha Luccioni, a research scientist at Hugging Face, notes that there is no easy fix, since determining the "right" outputs involves subjective judgments. One suggestion is to ask users how much diversity they want in generated images, as sketched below, but this approach introduces its own set of challenges and concerns.
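As a rough illustration of that suggestion, here is a hypothetical sketch in which a diversity instruction is applied only when the user explicitly provides one. The function and parameter names are invented for this example and do not correspond to any real Gemini API.

```python
from typing import Optional

def build_image_prompt(user_request: str,
                       diversity_note: Optional[str] = None) -> str:
    """Attach a diversity instruction only when the user explicitly opts in.

    Unlike a blanket system-wide rule, this leaves historically specific
    requests untouched by default.
    """
    if diversity_note:
        return f"{user_request}. Depict the people as: {diversity_note}."
    return user_request

# The user decides, per request, whether diversity should be injected:
print(build_image_prompt("a portrait of the US Founding Fathers"))
print(build_image_prompt("a photo of a software engineer",
                         diversity_note="a mix of genders and ethnicities"))
```

The trade-off is that shifting the decision to users raises its own questions about defaults and about who gets to choose.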

Professor Alan Woodward, a computer scientist, suggests that the problem is likely deeply embedded in both the training data and the overarching algorithms, and that untangling this intricate web will be difficult. He emphasizes the need for human involvement in any system whose outputs are relied upon as ground truth.

Google's cautious approach to Gemini's launch, coupled with the recent missteps, contrasts with the success of its rival, ChatGPT. The tech sector is collectively grappling with similar challenges related to bias and ethical AI. Observers note that while Google holds a considerable lead in the AI race, Gemini's stumbles have raised questions about the tech giant's approach to algorithmic fairness.

Gemini's "woke" AI problem highlights the delicate balance required when addressing bias in AI. As Google races to rectify Gemini's missteps, the incident serves as a reminder of the challenges inherent in navigating the interplay between training data, algorithms, and human values. Fair and unbiased AI outputs remain elusive, underscoring the ongoing need for careful scrutiny, ethical consideration, and human oversight in the development and deployment of AI technologies.


Azamat Abdoullaev

Tech Expert

Azamat Abdoullaev is a leading ontologist and theoretical physicist who introduced a universal world model as a standard ontology/semantics for human beings and computing machines. He holds a Ph.D. in mathematics and theoretical physics. 

   