- The US Government launches a public task force to address the challenges and seize the opportunities of generative AI.
- It seeks to collect information and develop an essential guide for organizations that work with generative AI, with the aim of minimizing risks and guaranteeing reliability.
- These measures respond to the rapid pace of AI development and the need to regulate the technology while protecting the United States' position as a global technology power.
The US government has recognized the need for collaboration in the field of generative artificial intelligence (generative AI) and has solicited the assistance of qualified professionals from the public.
With the goal of seizing the opportunities and overcoming the challenges associated with this technology, Gina Raimondo, US Secretary of Commerce, has announced the launch of a public working group at the National Institute of Standards and Technology (NIST).
This working group will focus on generative AI technologies spanning the generation of textual content, images, videos, music, and code. In addition, its work will support the development of essential guidance for organizations that wish to address the risks inherent in generative AI.
Collaboration and information: Gathering technical expertise and evaluations
The working group will be made up of volunteers with technical expertise from both the public and private sectors, who will work together in a collaborative online workspace.
Its first step will be to gather information on how the NIST AI risk management framework can be used to support the development of generative AI technologies.
Subsequently, the group is expected to support AI-related tests and evaluations conducted by the agency. In the longer term, the group will explore the opportunities generative AI offers to address some of the most pressing challenges of our time, such as health, climate change, and the environment.
Minimizing risks and guaranteeing reliability
In a statement, Raimondo highlighted that NIST’s recently released AI risk management framework can play a crucial role in reducing the potential harm associated with generative AI technologies.
In keeping with this framework, the new public working group will be dedicated to providing essential guidance for organizations involved in the development, deployment, and use of generative AI.
These organizations are responsible for ensuring that the technology is reliable and complies with the appropriate standards.
The need for regulation and protection
The United States government has realized the importance of keeping up with the rapid advances in generative AI technology.
In April, the National Telecommunications and Information Administration requested public comment on potential regulations that would hold AI creators accountable. At the same time, the White House urged American workers to share information about how automated tools are used in their workplaces.
In June, Representatives Ted Lieu (D-CA) and Ken Buck (R-CO) introduced legislation to establish a 20-person commission that would study ways to "mitigate the risks and potential harm" of AI while protecting America's position as a global technology powerhouse.
This legislation follows a call from Microsoft Vice Chair and President Brad Smith in Washington, DC, where he urged the US federal government to establish a new agency tasked with regulating AI.
Taken together, these initiatives demonstrate a comprehensive approach to understanding, regulating, and making the most of generative AI for the benefit of the economy, national security, and society at large.
Expert collaboration, information gathering, and the creation of essential guidelines are crucial steps in minimizing potential damage and ensuring the reliability of generative AI.