Boss of ChatGPT: “AI has the potential to cause harm to the world”

Sam Altman, CEO of OpenAI, the company behind ChatGPT, says his greatest fear about AI is that it could cause significant harm to the world if left unchecked.

On May 16th, Sam Altman testified before the United States Congress regarding the potential risks posed by AI. He stated, “This technology can veer off course, generate significant errors, and cause substantial harm to the world unless properly regulated.” He expressed his desire to collaborate with the government to prevent adverse scenarios in the future.

When Senator Josh Hawley raised concerns about the potential for large language models like ChatGPT to manipulate individuals, Altman responded, “I worry about that,” and compared the situation to the emergence of Photoshop in the early 2000s, when many people were deceived by manipulated images before the practice of photo editing became widely known.

Hawley also listed potential negative impacts of AI, such as job loss, threats to privacy, manipulation of individual behavior, and possible disruptions to elections in the United States. However, the CEO of OpenAI believes that AI will create more jobs rather than destroy them. He stated, “We are optimistic that there will be amazing jobs in the future, and current jobs can be significantly improved with the help of ChatGPT.”

Sam Altman, CEO of OpenAI, during his testimony before the United States Congress on May 16th. Photo: Bloomberg

Regarding AI regulation, both researcher Gary Marcus and Sam Altman agree on the establishment of a new agency to oversee this technology. Altman calls for companies developing AI to publicly disclose their models and underlying data. AI creators should obtain operating licenses or demonstrate the safety of their products before releasing them to the public. Independent audits of AI models are also necessary.

The emergence of a series of AI models in recent times has brought about significant changes in society. These tools can generate text and visual content autonomously, assist doctors in communicating with patients, and provide quick responses to complex questions. However, the race among large technology companies has also raised concerns.

Geoffrey Hinton, one of the pioneers in AI, announced his departure from Google in order to publicly raise awareness about the dangers of the technology.

“When AI starts writing its own code and running its own programs, real-life killer robots will emerge. AI has the potential to be smarter than humans. I was wrong to think that it would take 30-50 years for AI to make such progress. But now, everything is changing too fast,” Hinton said.

In March, Elon Musk and several technology experts signed a letter calling on governments to enact a ban on the development of AI models more powerful than GPT-4 for six months. However, during the testimony on May 16th, the participants acknowledged the difficulty of restraining the AI explosion. Companies and investors are pouring billions of dollars into this technology.

“There will be no pause. No regulatory body can halt the progress of AI,” stated Senator Cory Booker.

