Renowned Tech Experts Highlight Critical Risks of AI, Suggest Possible Regulations
(Photo: Win McNamee/Getty Images)
Effective management and regulation of AI has become a major issue as it advances rapidly.
  • Leading tech experts highlight critical risks of AI, including systemic bias, disinformation, cyberattacks, and weaponization.
  • OpenAI and Microsoft propose the creation of a government regulatory body to oversee licensing, testing, safety requirements, and disclosure guidelines for AI research.
  • Experts emphasize the urgent need for regulations to address the risks and ensure responsible and ethical AI development and usage.

As artificial intelligence (AI) advances rapidly, the question of effective management and regulation of this powerful technology has taken center stage. Leading tech giants, including Microsoft and OpenAI, have put forth a proposal for a government regulatory body that would oversee licensing, testing, safety requirements, and disclosure guidelines for AI research.

Renowned experts such as Dan Hendrycks, director of the Center for AI Safety, have emphasized the urgent need for regulation, highlighting the critical risks associated with AI, such as systemic bias, disinformation, malicious usage, cyberattacks, and weaponization, according to NPR.

During his Senate testimony, OpenAI CEO Sam Altman stressed the importance of implementing rules and regulations for AI, advocating for the creation of an AI regulatory body and mandatory licensing for companies. Other panelists echoed the importance of transparency in training data and the development of clear guidelines to address AI-related hazards.

Furthermore, there is growing concern about the emergence of a new tech monopoly in the AI industry due to the economic dynamics of developing large-scale AI models.

Microsoft Assisting Governments in Regulating AI

Microsoft President Brad Smith has dubbed AI regulation the critical challenge of the 21st century, CNN reported. Last month, he presented a comprehensive strategy for democratic nations to address the risks posed by AI while promoting a liberal vision for the technology.

Microsoft's position reflects its aim to influence ongoing government efforts, particularly in Europe and the United States, to regulate AI before it disrupts society and the economy.

Smith drew parallels between the impact of AI and that of the printing press, emphasizing its potential to streamline policymaking and constituent engagement. He called for comprehensive laws encompassing all aspects of AI's life cycle and supply chain, including data centers and end users like banks and hospitals.


The G7 countries discussed the challenges posed by generative AI technologies like ChatGPT, including intellectual property protection, disinformation, and technological governance, per Reuters. G7 leaders agreed to establish the "Hiroshima AI Process," an international forum dedicated to addressing the challenges brought about by rapidly advancing AI systems.

Lawmakers and officials worldwide are taking steps to address the issues raised by Altman's testimony. The European Union is advancing its AI Act, which categorizes AI applications by risk level: unacceptable, high, and low or minimal.

AI Regulation: A Collaborative Effort Across Sectors

The US agency, the National Institute of Standards and Technology (NIST), has developed an AI risk management framework with input from industry groups, technology companies, and think tanks.

Federal organizations, such as the Federal Trade Commission and the Equal Employment Opportunity Commission, have issued AI risk guidelines. Other bodies, like the Consumer Product Safety Commission, also play a role in addressing AI risks, according to Gizmodo.

Whether a new government agency dedicated to regulating AI is necessary has become a prominent topic of debate in Washington. IBM and other companies argue that existing government agencies, with their sector-specific expertise, should be responsible for AI regulation.

Additionally, Smith urged US President Joe Biden to issue an executive order requiring agencies to apply NIST's risk management framework, which provides guidelines for responsible and ethical AI usage, when procuring AI products.
