The Microsoft-backed OpenAI faces its largest regulatory threat yet: the US Federal Trade Commission has opened an inquiry into whether the company violated consumer protection rules by putting users' personal information and reputations at risk.

OpenAI, maker of the generative artificial intelligence chatbot ChatGPT, received a 20-page records request from the FTC this week concerning how it manages the risks associated with its AI models, Reuters reported.

AI and Concerns Over Misinformation

(Photo: JACK GUEZ/AFP via Getty Images)
The logo of US artificial intelligence company OpenAI is pictured during a talk by its co-founders at the campus of Tel Aviv University in Tel Aviv on June 5, 2023.

The FTC has questioned OpenAI about its efforts to address the possibility that its products "generate statements about real individuals that are false, misleading, or disparaging."

The company's latest technology, GPT-4, was built on years of safety research. OpenAI CEO Sam Altman also noted in a series of tweets on Thursday that the systems were created to learn about the world rather than specific individuals.

In November, OpenAI released ChatGPT, captivating users and setting off a race among major tech firms to show how their AI-infused products could transform businesses and societies.

The AI race has stoked widespread worry about the technology's potential hazards and drawn government scrutiny. Global regulators want to apply existing copyright and data privacy laws to two crucial issues: the data fed into models and the content they produce.

US Senate Majority Leader Chuck Schumer has called for "comprehensive legislation" to promote and ensure AI safety measures and has promised to host several events later this year to "lay down a new foundation for AI policy."

In March, OpenAI ran into trouble in Italy when the country's data protection authority ordered ChatGPT taken offline over claims that the company had breached the EU's GDPR, the bloc's comprehensive privacy law in force since 2018.

ChatGPT was later reinstated after the American company agreed to add age verification tools and to let European users opt out of having their information used to train the AI model.

Read Also: Elon Musk 'TruthGPT' Seeks To Rival OpenAI's ChatGPT, Google

AI and Its Safety Today

One of the biggest concerns about AI is its potential to be used for malicious purposes. For example, AI could be used to develop autonomous weapons systems that could kill without human intervention.

The technology could also be used to spread misinformation or propaganda, or to manipulate people's behavior.

AI systems can also inherit bias from the data they are trained on, which could lead them to make discriminatory decisions or perpetuate stereotypes.

Finally, there is the concern that AI could surpass human intelligence, with systems becoming so powerful that they threaten humanity.

While these are all valid concerns, it is essential to remember that AI is still a developing technology. Many people are working to address the safety risks associated with AI, and these risks will likely be mitigated in the future.

Related Article: Apple Limits Employee Access to OpenAI's ChatGPT, Raising Questions about AI Ethics