Microsoft engineer Shane Jones sounded the alarm about offensive, harmful, and obscene imagery he said could be created too easily with the company's artificial intelligence (AI) image generator tool, sending letters on Wednesday (Mar. 6) to US regulators and the tech giant's board of directors urging them to take action.

Jones told the Associated Press that he considered himself a whistleblower and had also raised his concerns about Microsoft's Copilot Designer with the US government, including by meeting with US Senate staffers and sending a letter to the Federal Trade Commission (FTC).

The FTC confirmed that it received Jones's letter on Wednesday but declined to comment further.

Microsoft said it was committed to addressing employee concerns about company policies and appreciated Jones's "effort in studying and testing our latest technology to further enhance its safety." The company also recommended that Jones use its own "robust internal reporting channels" to investigate and address the problems.


Publicizing Grievances

CNBC was the first to report on the letters. Microsoft reportedly referred Jones to OpenAI, and when he did not hear back from the firm, he posted an open letter on LinkedIn asking the startup's board to take DALL-E 3 offline pending an investigation.

He said Microsoft's legal department told him to remove the post immediately, and he complied. He has since escalated the matter to FTC chair Lina Khan and Microsoft's board of directors.

"Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place," Jones wrote in his letter to Khan.

He added that because Microsoft had "refused" that recommendation, he called on the company to add disclosures to the product and to change the app's rating on Google's Android store to make clear it is intended for mature audiences only.

"Again, they have failed to implement these changes and continue to market the product to 'Anyone. Anywhere. Any Device,'" he added in the letter.

His public letters came after Google late last month temporarily sidelined its AI image generator, part of its Gemini AI suite, following user complaints that it produced inaccurate images and questionable responses to their queries.
