WASHINGTON, DC - JANUARY 10: Danielle Coffey, President and CEO of the News Media Alliance; Professor Jeff Jarvis of the CUNY Graduate School of Journalism; Curtis LeGeyt, President and CEO of the National Association of Broadcasters; and Roger Lynch, CEO of Condé Nast, are sworn in during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on "Artificial Intelligence and The Future Of Journalism" at the U.S. Capitol on January 10, 2024 in Washington, DC. Lawmakers continue to hear testimony from experts and business leaders about artificial intelligence and its impact on democracy, elections, privacy, liability and news.
(Photo by Kent Nishimura/Getty Images)

Google is restricting its AI chatbot Gemini ahead of the global elections taking place later this year. The Alphabet-owned firm announced the restrictions to avoid potential missteps with the new technology.

The updates come on the heels of multiple advancements in generative AI, including image and video generation, which have raised concerns about misinformation and fabricated content being presented to the public.

Governments have stepped in to regulate the technology, placing restrictions on how these advanced systems can be used.

Google first announced the restrictions for the United States in December, saying they would take effect ahead of the 2024 election.

"In preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we are restricting the types of election-related queries for which Gemini will return responses," a company spokesperson said on Tuesday.

For instance, when asked about the upcoming US presidential match-up between Joe Biden and Donald Trump, Gemini responds, "I'm still learning how to answer this question. In the meantime, try Google Search," Reuters reported.

The US is not the only nation holding elections this year; votes are also scheduled in several other large countries, including South Africa and India, the world's largest democracy.

India has asked tech firms to seek government approval before publicly releasing AI tools deemed "unreliable" or still in an experimental phase, and to warn users that such tools may give incorrect information.

Last month, Google faced backlash after Gemini generated "inaccurate" historical depictions of people, forcing the company to pause the chatbot's image-generation feature.

CEO Sundar Pichai had said the company was working to fix those issues and called the chatbot's responses "biased" and "completely unacceptable."

Facebook parent Meta Platforms has also revealed plans to set up a team dedicated to tackling disinformation and monitoring abuse of generative AI in the run-up to European Parliament elections in June.