Google is having fruitful initial discussions with European Union regulators about the bloc's ground-breaking artificial intelligence legislation and how it and other businesses can develop AI safely and ethically, according to the president of the company's cloud computing division.

According to CNBC, Google is developing tools to address a number of AI-related concerns, including the worry that it may become harder to distinguish information created by people from information created by AI.

A Watermarking System

The search giant is developing tools that will allow users to distinguish between material produced by humans and material produced by artificial intelligence. The company has unveiled a "watermarking" system that marks AI-generated images.

The move suggests that Google and other major tech corporations are working on ways to bring private sector-driven oversight to AI ahead of any formal rules on the technology.

With tools like ChatGPT and Stable Diffusion, AI systems are developing at a fast pace and producing results that go beyond what was previously possible with the technology. Computer programmers, for example, are increasingly using ChatGPT and similar tools as coding assistants to help them generate code.

Read also: Google AI Search Engine Reportedly in the Works

EU Exerts More Control Over AI Deployment

However, generative AI models have lowered the barrier to the mass production of content based on copyright-infringing material, which might hurt artists and other creative professionals who depend on royalties for a living.

(Photo: PATRICK T. FALLON/AFP via Getty Images) Made by Google devices, including Google Assistant and Google Home devices for connected smart homes, are demonstrated at Alphabet's Google Android plaza booth during the Consumer Electronics Show (CES) in Las Vegas, Nevada, on January 5, 2023.

This is a major concern for EU policymakers and authorities further afield. Generative AI models are trained on huge collections of freely accessible internet data, much of it copyright-protected.

Members of the European Parliament approved legislation earlier this month intended to bring AI deployment across the bloc under tighter control. The EU AI Act contains rules to ensure that the training data used by generative AI tools does not infringe copyright.

AI Becomes a Battleground in the Digital Industry

AI has emerged as a major battleground in the global digital industry as businesses compete for a leading position in developing the technology, particularly generative AI, which can create new content from user prompts.

Academics and boardrooms alike have been astounded by what generative AI is capable of, from writing song lyrics to creating code.

However, it has also given rise to concerns about job loss, false information, and bias. Some of Google's top researchers and staff members have expressed alarm about how swiftly AI is developing.

For instance, in posts on the internal forum Memegen, Google staffers criticized the company's unveiling of Bard, its generative AI chatbot built to compete with Microsoft-backed OpenAI's ChatGPT, as "rushed," "botched," and "un-Googley."

The U.K. is among the countries rushing to regulate AI; rather than enacting formal legislation, it has established a framework of AI principles for regulators to implement. In the United States, President Joe Biden's administration and various government agencies have also proposed regulatory frameworks for AI.

But the main complaint among those who work in the tech sector is that regulators do not react to new technologies quickly enough. As a result, many businesses are developing their own methods for putting guardrails around AI rather than waiting for formal legislation to be passed.

Related article: Google Engineer Claims AI Chatbot Is 'Sentient,' Compares It to a 'Precocious Child'