
Europe takes the lead in establishing AI regulations


According to AP, the rapid development of Artificial Intelligence has sparked a frenzy among users composing music, creating images, and writing essays. However, the technology is also raising concerns in the community about its consequences.

In an endeavor to govern this emerging technology, the European Union (EU) is actively working towards establishing regulations. Two years ago, the bloc of 27 countries introduced its initial set of regulations concerning Artificial Intelligence, with a primary focus on mitigating the risks associated with AI applications, albeit with limited coverage. Notably, advanced AI chatbots received minimal attention during that period.

Dragos Tudorache, a Romanian member of the European Parliament who leads the efforts to establish AI regulations, said: “Then, ChatGPT exploded. If there were still some people doubting whether we needed AI regulations or not, I think that doubt quickly disappeared.”

The introduction of ChatGPT last year garnered global attention, primarily due to its remarkable capacity to generate human-like responses, thanks to its extensive dataset.

Responding to mounting apprehensions surrounding this matter, European lawmakers swiftly incorporated provisions specifically addressing AI systems as they diligently refined their legislation.

The EU’s AI law could become a global standard for the development of artificial intelligence. Sarah Chander, a senior policy advisor at the digital rights group EDRi, said: “Europe is the first bloc to try to regulate AI strongly. It’s a big challenge when considering what AI can encompass.”

The EU’s wide-ranging regulations on AI, which are expected to bind all providers of AI services and products, are on track to be passed by a body of the European Parliament on Thursday. After approval by this body, the draft will be sent for negotiations among the EU’s 27 member states, the Parliament, and the European Commission, the bloc’s executive arm.

Concerns about AI are global

Global authorities are actively seeking means to exert control over Artificial Intelligence in order to ensure that this technology enhances people’s lives without compromising their rights or safety. Regulators are particularly apprehensive about the emerging ethical and social risks associated with AI systems like ChatGPT, as they have the potential to significantly impact various aspects of everyday life, including work, education, copyright, and privacy concerns.

In recent developments, the White House extended invitations to senior executives from leading AI technology companies such as Microsoft, Google, and the creators of ChatGPT, OpenAI, to engage in discussions regarding these risks. Additionally, the US Federal Trade Commission issued a warning, making it clear that they will not hesitate to take strict action against any misconduct or misuse of AI technology.

China has taken a step forward by issuing a draft regulation that mandates a security assessment for any product utilizing AI systems akin to ChatGPT. In the UK, the competition watchdog has initiated an evaluation of the AI market, while Italy has imposed a temporary ban on ChatGPT due to privacy rights violations.

The development of AI has also prompted apprehension among prominent figures in the technology realm. Renowned tech leaders like Elon Musk and Apple co-founder Steve Wozniak have advocated for a six-month pause in AI development so that the associated risks can be thoroughly assessed.

Last week, prominent computer scientist Geoffrey Hinton and AI pioneer Yoshua Bengio raised concerns about the perils of unrestricted AI development, underscoring the importance of addressing the potential risks and implications of the technology.

Mr. Tudorache stated that such warnings demonstrate that the EU’s move to begin building AI regulations from 2021 is “a rightful action.”

Protecting the rights and privacy of users

Recent additions to the EU’s AI Act will require “foundational” AI models like ChatGPT to disclose the copyrighted materials used in their data collection process, according to a recent draft law accessed by AP.

Foundational models, also known as large language models, are a subset of AI development. Their algorithms are trained on a vast repository of online information, such as blog posts, e-books, scientific papers, and catalogs of songs.

Mr. Tudorache stated: “Significant effort is required to store copyrighted materials that have been used in the process of building algorithms.” This disclosure will enable artists, writers, and other content creators to seek recourse.

The EU adopts a risk-based approach where stringent control measures are implemented for the use of Artificial Intelligence that poses threats to individual safety or rights.

In line with this approach, the EU is expected to impose a ban on remote facial recognition. Furthermore, the practice of scanning random photos from the internet for the purpose of biometric matching and facial recognition is strongly discouraged.

The use of psycho-predictive practices and emotion recognition technology, except for treatment or healthcare purposes, will be significantly curtailed under the new regulations. Violations of these restrictions can lead to fines of up to 6% of a company’s global annual revenue.

Even once the EU law receives final approval from all pertinent bodies, it is not expected to take effect immediately. Instead, it is likely to be implemented around the end of this year or early 2024, giving companies and organizations a transition period to adapt and devise strategies for complying with the new regulations.
