U.S. Forms AI Safety Consortium with Leading Companies to Address Risks

The Biden administration has announced the formation of the U.S. AI Safety Institute Consortium (AISIC), comprising over 200 entities, including prominent artificial intelligence (AI) companies and government agencies. The consortium aims to promote the safe development and deployment of generative AI technology.

Composition of the Consortium:
The AISIC includes industry leaders such as OpenAI, Google, Anthropic, Microsoft, Meta Platforms (formerly Facebook), Apple, Amazon, Nvidia, Palantir, Intel, JPMorgan Chase, and Bank of America, among others. Academic institutions, government agencies, and companies from various sectors are also part of the consortium.

Objectives and Scope:
Led by the U.S. AI Safety Institute (USAISI), the consortium will focus on priority actions outlined in President Biden’s AI executive order. This includes developing guidelines for red-teaming, capability evaluations, risk management, safety, security, and watermarking synthetic content.
Red-teaming, a practice long used in cybersecurity and named for Cold War exercises in which the simulated adversary was called the "red team," involves staging attacks and misuse scenarios against a system to uncover vulnerabilities before they can be exploited.

Government’s Role and Initiatives:
Commerce Secretary Gina Raimondo emphasized the government’s role in setting the standards and developing the tools needed to mitigate AI risks while harnessing the technology’s potential.
President Biden’s executive order directed agencies to establish standards for testing AI systems and address related cybersecurity risks.

Development of Standards and Guidance:
The Commerce Department has begun drafting key standards and guidance for the safe deployment and testing of AI, including work to lay the foundation for a new measurement science in AI safety.

Challenges and Future Directions:
Generative AI technology, capable of producing text, images, and video from open-ended prompts, has raised concerns about job displacement, election interference, and the possibility that increasingly capable systems could act beyond human control.
Despite ongoing efforts to establish safeguards, legislation addressing AI has repeatedly stalled in Congress.

The establishment of the AISIC represents a significant collaboration between industry stakeholders and government entities to address the challenges and risks associated with AI technology. As the consortium begins its work, it aims to develop comprehensive guidelines and standards to ensure the safe and responsible deployment of AI systems.

Source: Adapted from Reuters
