How can you distinguish AI-generated photos?

On May 22nd, an image generated by artificial intelligence (AI) purporting to show an explosion near the Pentagon spread across American social media, briefly sending the stock market tumbling. The incident has sharpened concerns about AI-driven misinformation.

A fake AI-generated photo of an explosion near the Pentagon. Photo: CNN

Social media detectives, including Nick Waters of Bellingcat, an online verification group, quickly pointed out several problems with the image. First, there were no direct witnesses to corroborate the event. The building in the photo also did not resemble the Pentagon, which anyone could check by comparing it against tools like Google Street View. Anomalies such as floating lamp posts and black columns protruding from the sidewalk made clear that the image was not real.

Image-generation tools such as Midjourney, DALL-E 2, and Stable Diffusion can produce lifelike pictures. They are trained on enormous collections of real-world photographs, but where their training data runs out they fill the gaps with their own interpretations. This can leave AI-generated people with extra limbs, or objects that blur and warp into their surroundings.

Another AI-generated fake photo, showing billionaire Elon Musk with GM CEO Mary Barra. Photo: DW

Al Jazeera has outlined several ways to tell AI-generated images from genuine photos when pictures of a major event begin circulating online.

A real explosion or other significant event usually produces factual reports from many people and from different vantage points.

• Who is uploading the content? Examine the account's posting history, the location of the event versus the poster's location, and who they follow and who follows them. Can you reach out to them or engage them in conversation?

• Reverse image search tools such as Google Images and TinEye let you upload an image and find where and when it first appeared online. Other tools can help too, such as live feeds from public traffic cameras that show whether an event is actually taking place. (A minimal script for automating a reverse image search appears after this list.)

• Image and environmental analysis: look for clues in the picture, such as nearby landmarks, traffic signs, and even the weather, to help establish where or when the event may have occurred. (A sketch for checking an image's EXIF metadata, a complementary signal, also follows this list.)

• For images of people, pay special attention to the eyes, hands, and overall posture. AI-generated videos that mimic humans, known as deepfakes, tend to have trouble with blinking because most training datasets contain few closed-eye faces. Hands may not grip objects correctly, and limbs may appear unnaturally distorted. Germany's DW adds that other common errors in AI-generated images include people with too many teeth, warped eyeglass frames, and unnaturally shaped ears. (A crude blink-counting heuristic is sketched after this list.)

• Skin in AI-generated images is often unnaturally smooth, and even hair and teeth look flawless. Many of these images also have an artistic, glossy, shimmering quality that even professional photographers struggle to achieve in a studio. In some cases, AI programs duplicate people and objects, using the same element twice. Blurry backgrounds are also common in AI-generated images.
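
Scripting the reverse-image-search step is straightforward. The sketch below is a minimal example that opens a publicly hosted image in TinEye and Google Lens. The query-string formats used here (tineye.com/search?url=... and lens.google.com/uploadbyurl?url=...) are assumptions about how these services currently accept URL submissions, so verify them before relying on the script.

```python
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open reverse image searches for a publicly hosted image.

    The query-string formats below are assumptions about how TinEye
    and Google Lens currently accept URL submissions; verify them
    before relying on this script.
    """
    encoded = quote(image_url, safe="")
    # TinEye accepts the image URL as a query parameter.
    webbrowser.open(f"https://tineye.com/search?url={encoded}")
    # Google Lens exposes a similar URL-based entry point.
    webbrowser.open(f"https://lens.google.com/uploadbyurl?url={encoded}")

if __name__ == "__main__":
    # Hypothetical URL; substitute the image you want to verify.
    reverse_image_search("https://example.com/suspect-photo.jpg")
```

Opening both services at once makes it easy to compare the earliest indexed appearances of the image.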
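
Environmental analysis can be complemented with a metadata check: photos from real cameras usually carry EXIF data (camera model, timestamp, sometimes GPS coordinates), while AI generators typically emit none. Metadata is easily stripped or forged, so treat its absence as one weak signal among many. A minimal sketch using the Pillow library:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> None:
    """Print any EXIF metadata embedded in an image file.

    An empty result is common for AI-generated images, but absence
    of EXIF alone proves nothing: metadata is easily stripped.
    """
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        # Map numeric tag IDs to human-readable names where known.
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

if __name__ == "__main__":
    dump_exif("suspect-photo.jpg")  # hypothetical file name
```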
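
The blinking weakness can also be probed mechanically. The sketch below counts the fraction of video frames in which open eyes are detected, using OpenCV's bundled Haar cascades; a clip whose subject registers open eyes in nearly every frame blinks suspiciously rarely. This is an illustrative heuristic under simple assumptions (one subject, frontal face, arbitrary threshold), not a reliable deepfake detector.

```python
import cv2

def open_eye_ratio(video_path: str) -> float:
    """Return the fraction of frames where open eyes are detected.

    Haar eye cascades only fire on open eyes, so a ratio near 1.0
    over a long clip suggests the subject rarely blinks, a known
    weakness of many deepfakes. A crude heuristic, not a detector.
    """
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    cap = cv2.VideoCapture(video_path)
    frames = eyes_open = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            # Search for eyes only inside the detected face region.
            if len(eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])) >= 2:
                eyes_open += 1
                break
    cap.release()
    return eyes_open / frames if frames else 0.0

if __name__ == "__main__":
    ratio = open_eye_ratio("suspect-clip.mp4")  # hypothetical file
    print(f"Open-eye frames: {ratio:.0%}")
```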
