New X Platform Feature Raises Questions: Elon Musk Warns About “Edited Images”


A newly launched feature on the X platform (formerly Twitter) has sparked widespread discussion, particularly because it allows users to modify images on certain posts belonging to other accounts.

Elon Musk issued a warning about this feature through his official X account, posting: “Warning: Edited images,” while reposting an announcement about the new feature from the anonymous account DogeDesigner.

The DogeDesigner account is known for sharing updates and news about X’s latest features. According to its post, the newly introduced feature could make it harder for traditional media outlets to publish misleading or manipulated photos and videos.

It is worth noting that in the past, X was known for labeling tweets containing altered, fake, or fabricated media rather than removing them entirely.

Image Labeling Feature on X

According to TechCrunch, back in 2020 Yoel Roth, then Head of Site Integrity, explained that X’s policy was not limited to AI-generated content; it also covered practices such as:

  • Selective editing

  • Cropping

  • Slowing footage

  • Adding voice dubbing

  • Manipulating translations

Technology experts believe that X may have recently introduced significant changes to address AI-related challenges, including a policy against sharing non-original media, though that policy is reportedly rarely enforced.

The platform is expected to follow a specific process to identify AI-generated content. Nevertheless, Elon Musk has not disclosed how this process works, nor clarified whether he is referring specifically to AI-generated images or to any content that is not directly uploaded from a smartphone camera to X.

The Problem of Edited Media Across Platforms

X is not the only platform struggling with edited or AI-altered media.

Meta previously faced criticism due to issues with labeling edited photos as AI-generated, even when the images were authentic and not created using generative AI. Eventually, Meta adjusted its labels to read “AI information” instead of explicitly stating that the images were “AI-generated.”

Adobe also encountered similar challenges. Images edited using tools to remove minor elements—such as wrinkles in clothing or unwanted reflections—were sometimes automatically classified as “AI-generated.”

Today, there is an organization dedicated to setting standards for verifying the authenticity and provenance of digital content: the C2PA (Coalition for Content Provenance and Authenticity).
