The great danger of AI.

Microsoft President Brad Smith believes that deepfakes, realistic fake videos and images generated by AI, pose the technology's greatest danger today.

Speaking at an event in Washington this week, Smith voiced serious concern about the growing use of deepfake content to imitate and impersonate the appearance and voices of others.

“We need to address the issues surrounding deepfakes,” Microsoft’s President emphasized. “Steps must be taken to protect against the manipulation of legitimate content with the intent to deceive people through AI.”

Deepfake is a portmanteau of “deep learning” and “fake”: AI is used to analyze a person’s gestures, facial expressions, and voice, then reproduce and manipulate them to create authentic-looking images or videos. The technology has existed for several years, but the recent generative AI boom, in which artificial intelligence can produce images, videos, code, and text, has accelerated the spread of deepfakes.

The misuse of these tools to spread misinformation has become a concern for governing bodies. In March, for instance, Midjourney was used to create an image depicting former US President Donald Trump being arrested, and another showing Pope Francis in a stylish white puffer jacket. The images were widely shared on social media, and many users believed they were real, as reported by The Washington Post.

The fake photo of Mr. Trump being arrested, created with Midjourney, drew millions of views in just a few minutes.

According to Brad Smith, users need help identifying AI-generated content, for example through labels that flag images and videos created by artificial intelligence. He also proposes tightening US export regulations to keep AI models from falling into the hands of unauthorized third parties.

During a congressional hearing on May 16th, Sam Altman, CEO of OpenAI, likewise acknowledged that artificial intelligence is a breakthrough but can become a danger if it falls into the wrong hands. It “can go astray, make mistakes, and cause significant harm to the world if not properly regulated,” he said. He expressed a desire to work with the government to prevent malicious scenarios in the future and to support the establishment of a regulatory agency for the technology.
