
Google, Meta, Microsoft, OpenAI, and others agree to voluntary AI safety measures.

Although many tech giants have signed the commitments, the U.S. government's principles impose no penalties on companies that fail to comply with them.

Several leading U.S. AI companies have recently agreed to work with the U.S. government and committed to a set of principles meant to build public trust in AI.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have all signed commitments to make AI safe, secure, and trustworthy. In May, the Biden administration said it would meet with top developers to ensure their work aligned with U.S. policies.

The commitments are non-binding and carry no penalties for non-compliance, and they do not cover AI systems that have already been deployed. One of the terms commits companies to internal and external security testing of their systems before release.

Still, the commitments agreed to by these companies are aimed at reassuring the public (and, to some extent, lawmakers) that AI can be deployed responsibly. The Biden administration has also proposed using AI within the government to streamline its own work.

Perhaps the most immediate impact will be felt in AI-generated imagery, as all signatories have agreed to use digital watermarking to identify whether an image was created by AI. Some services, such as Bing Image Creator, have already implemented this.

The signatories also pledge to use AI for public benefit, such as cancer research, and to identify appropriate and inappropriate use cases. What counts as inappropriate is yet to be defined, but it may build on existing safeguards that prevent tools like ChatGPT from being misused, for example by someone planning a terrorist attack.

Companies also commit to protecting data privacy, a priority Microsoft has emphasized in the enterprise versions of Bing Chat and Microsoft 365 Copilot.

All of the companies have committed to internal and external security assessments of their systems before release, and to sharing information about AI risk management with industry, governments, academia, and the public. They also commit to giving third-party researchers access to discover and report vulnerabilities.

Microsoft President Brad Smith endorsed the new commitments, noting that Microsoft supports the creation of a national regulatory agency for high-risk AI systems. Google has also disclosed that it has formed a team dedicated to attacking its AI systems in order to find weaknesses.

OpenAI said in a statement, “As part of our mission to build safe and beneficial AGI, we will continue to test and fine-tune governance measures tailored to highly capable foundation models like the ones we produce. We will also continue investing in research that can help inform regulation, such as techniques for assessing potential risks in AI models.”
