According to a new report from Reuters, Meta will begin detecting and labeling AI-generated images from other companies' AI services. To do this, the company will rely on a set of invisible markers built into the image files.

(Photo: KIRILL KUDRYAVTSEV/AFP via Getty Images)
A photo taken on November 23, 2023 shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a smartphone screen (L) and the letters AI on a laptop screen in Frankfurt am Main, western Germany.

The company already labels content created with its own AI tools, so the new system will most likely extend those existing methods.

Meta will apply the labels to all content posted to Instagram, Facebook, and Threads that carries the markers, signaling to users that the content is AI-generated, since many of these images already look like real photos. Once the new system is in place, the company plans to apply these methods to images created on other platforms such as OpenAI, Microsoft, Midjourney, Adobe, and Shutterstock, among others. The development finally shows users and the general public how the company plans to combat the potential harm from these images, which can spread false or misleading information and potentially violate a person's privacy.
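Meta has not published the exact format of these markers, but industry labeling schemes such as the IPTC "Digital Source Type" vocabulary embed a machine-readable value (`trainedAlgorithmicMedia`) in an image's XMP metadata. As a rough, hypothetical illustration of how such a marker might be detected (this is not Meta's actual detector; the function name and byte-scan approach are assumptions for the sketch), a script could scan a file's bytes for that value:

```python
# Hypothetical sketch: check an image file for the IPTC
# "trainedAlgorithmicMedia" digital-source-type value, which some AI
# tools write into XMP metadata to mark generated images.
# NOT Meta's actual detection method, which has not been made public.

AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
XMP_HEADER = b"http://ns.adobe.com/xap/1.0/"  # identifies an XMP packet in a JPEG APP1 segment

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the file appears to carry the IPTC AI-generated marker."""
    # A real implementation would parse JPEG segments properly;
    # a simple byte scan is enough for this illustration.
    return XMP_HEADER in image_bytes and AI_MARKER in image_bytes

# Synthetic bytes standing in for a labeled image:
fake_jpeg = (b"\xff\xd8\xff\xe1" + XMP_HEADER + b"\x00<xmp>"
             + AI_MARKER + b"</xmp>" + b"\xff\xd9")
print(looks_ai_generated(fake_jpeg))            # True
print(looks_ai_generated(b"\xff\xd8\xff\xd9"))  # False
```

In practice, such markers can be stripped by re-encoding or screenshotting an image, which is one reason robust invisible watermarks, rather than plain metadata, are part of the industry effort the article describes.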

However, in an interview with Reuters, the company's president of global affairs, Nick Clegg, stated that there is still no method to identify written pieces generated by AI software such as OpenAI's ChatGPT. "That ship has sailed," Clegg told reporters. Clegg did, however, emphasize the company's efforts to tackle AI-generated images: "Even though the technology is not yet fully mature, particularly when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow."

To combat AI-generated audio and video, Clegg told reporters that the company will require users to label their own altered audio and video, with consequences should they fail to do so. Although those consequences remain unspecified, the company says it plans to develop its own tools to label audio and video, but notes that building them is more complicated and will take time. In the meantime, user self-labeling will be the primary method.

As artificial intelligence continues to grow and develop, it is becoming increasingly clear that lines will be drawn to protect users and to combat the spread of misinformation and violations of privacy. At the same time, AI continues to be integrated into industries ranging from gambling to the military, showing once again that while AI faces pushback, it is here for the long haul.