
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence 

This fact sheet detailing President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence was provided by the White House:

President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, and advances American leadership around the world. (Karen Rubin/news-photos-features.com via c-span)

Today, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.

As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.

“President Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security, and trust. It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks,” stated White House Deputy Chief of Staff Bruce Reed.

The Executive Order directs the following actions:

New Standards for AI Safety and Security

As AI’s capabilities grow, so do its implications for Americans’ safety and security. With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems:

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public. 
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
  • Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.
  • Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their mission, and will direct actions to counter adversaries’ military use of AI.

Protecting Americans’ Privacy

Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems. To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions:

  • Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data. 
  • Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.
  • Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data.
  • Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans’ data.

Advancing Equity and Civil Rights

Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions:

  • Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.
  • Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
  • Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.

Standing Up for Consumers, Patients, and Students

AI can bring real benefits to consumers—for example, by making products better, cheaper, and more widely available. But AI also raises the risk of injuring, misleading, or otherwise harming Americans. To protect consumers while ensuring that AI can make Americans better off, the President directs the following actions:

  • Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of—and act to remedy—harms or unsafe healthcare practices involving AI.
  • Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.

Supporting Workers

AI is changing America’s jobs and workplaces, offering both the promise of improved productivity and the dangers of increased workplace surveillance, bias, and job displacement. To mitigate these risks, support workers’ ability to bargain collectively, and invest in workforce training and development that is accessible to all, the President directs the following actions:

  • Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection.
  • Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.

Promoting Innovation and Competition

America already leads in AI innovation—more AI startups raised first-time capital in the United States last year than in the next seven countries combined. The Executive Order ensures that we continue to lead the way in innovation and competition through the following actions:

  • Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change.
  • Promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities.
  • Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.

Advancing American Leadership Abroad

AI’s challenges and opportunities are global. The Biden-Harris Administration will continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide. To that end, the President directs the following actions:

  • Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department in collaboration with the Commerce Department will lead an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak.
  • Accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable.
  • Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure.

Ensuring Responsible and Effective Government Use of AI

AI can help government deliver better results for the American people. It can expand agencies’ capacity to regulate, govern, and disburse benefits, and it can cut costs and enhance the security of government systems. However, use of AI can pose risks, such as discrimination and unsafe decisions. To ensure the responsible government deployment of AI and modernize federal AI infrastructure, the President directs the following actions:

  • Issue guidance for agencies’ use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment.
  • Help agencies acquire specified AI products and services faster, more cheaply, and more effectively through more rapid and efficient contracting.
  • Accelerate the rapid hiring of AI professionals as part of a government-wide AI talent surge led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies will provide AI training for employees at all levels in relevant fields.

As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI. The Administration has already consulted widely on AI governance frameworks over the past several months—engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.

The actions that President Biden directed today are vital steps forward in the U.S.’s approach to safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.

For more on the Biden-Harris Administration’s work to advance AI, and for opportunities to join the Federal AI workforce, visit AI.gov.

Biden-Harris Administration, DARPA Launch $20 Million Artificial Intelligence Cyber Challenge to Protect America’s Critical Software

Several leading AI companies – Anthropic, Google, Microsoft, and OpenAI – to partner with DARPA in major competition to make software more secure

The Biden-Harris Administration has launched a major two-year competition that will use artificial intelligence (AI) to protect the United States’ most important software, such as code that helps run the internet and our critical infrastructure. The “AI Cyber Challenge” (AIxCC) will challenge competitors across the United States to identify and fix software vulnerabilities using AI. Led by the Defense Advanced Research Projects Agency (DARPA), this competition will include collaboration with several top AI companies – Anthropic, Google, Microsoft, and OpenAI – who are lending their expertise and making their cutting-edge technology available for this challenge. This competition, which will feature almost $20 million in prizes, will drive the creation of new technologies to rapidly improve the security of computer code, one of cybersecurity’s most pressing challenges. It marks the latest step by the Biden-Harris Administration to ensure the responsible advancement of emerging technologies and protect Americans.

The Biden-Harris Administration announced AIxCC at the Black Hat USA Conference in Las Vegas, Nevada, the nation’s largest hacking conference, which for decades has produced many cybersecurity innovations. By finding and fixing vulnerabilities in an automated and scalable way, AIxCC fits into this tradition. It will demonstrate the potential benefits of AI to help secure software used across the internet and throughout society, from the electric grids that power America to the transportation systems that drive daily life.

DARPA will host an open competition in which the competitor that best secures vital software will win millions of dollars in prizes. AI companies will make their cutting-edge technology—some of the most powerful AI systems in the world—available for competitors to use in designing new cybersecurity solutions. To ensure broad participation and a level playing field for AIxCC, DARPA will also make available $7 million to small businesses who want to compete.

Teams will participate in a qualifying event in Spring 2024, where the top scoring teams (up to 20) will be invited to participate in the semifinal competition at DEF CON 2024, one of the world’s top cybersecurity conferences. Of these, the top scoring teams (up to five) will receive monetary prizes and continue to the final phase of the competition, to be held at DEF CON 2025. The top three scoring competitors in the final competition will receive additional monetary prizes.

The top competitors will make a meaningful difference in cybersecurity for America and the world. The Open Source Security Foundation (OpenSSF), a project of the Linux Foundation, will serve as a challenge advisor. It will also help ensure that the winning software code is put to use right away protecting America’s most vital software and keeping the American people safe.

Today’s announcement is part of a broader commitment by the Biden-Harris Administration to ensure that the power of AI is harnessed to address the nation’s great challenges, and that AI is developed safely and responsibly to protect Americans from harm and discrimination. Last month, the Biden-Harris Administration announced it had secured voluntary commitments from seven leading AI companies to manage the risks posed by the technology. Earlier this year, the Administration announced a commitment from several AI companies to participate in an independent, public evaluation of large language models (LLMs)—consistent with responsible disclosure principles—at DEF CON 2023. This exercise, which starts later this week and is the first-ever public assessment of multiple LLMs, will help advance safer, more secure and more transparent AI development.

In addition, the Biden-Harris Administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible AI innovation.

FACT SHEET: Biden Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI

Voluntary commitments – underscoring safety, security, and trust – mark a critical step toward developing responsible AI
 
Biden-Harris Administration will continue to take decisive action by developing an Executive Order and pursuing bipartisan legislation to keep Americans safe

Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris Administration have moved with urgency to seize the tremendous promise and manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety. As part of this commitment, President Biden is convening seven leading AI companies at the White House today – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to announce that the Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.   
 
Companies that are developing these emerging technologies have a responsibility to ensure their products are safe. To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.
 
These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI. As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe.
 
There is much more work underway. The Biden-Harris Administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.

The Biden Administration has secured voluntary commitments from seven top technology companies that they will undertake standards and procedures to responsibly develop AI (Artificial Intelligence) to ensure safety, security, and trust © Karen Rubin/news-photos-features.com

In remarks announcing the commitments, President Biden said, “We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years.  That has been an astounding revelation to me, quite frankly.  Artificial intelligence is going to transform the lives of people around the world.
 
“The group here will be critical in shepherding that innovation with responsibility and safety by design to earn the trust of Americans.  And, quite frankly, as I met with world leaders, all the G7 is focusing on the same thing.
 
“Social media has shown us the harm that powerful technology can do without the right safeguards in place.
 
“And I’ve said at the State of the Union that Congress needs to pass bipartisan legislation to impose strict limits on personal data collection, ban targeted advertisements to kids, require companies to put health and safety first.
 
“But we must be clear-eyed and vigilant about the threats emerging — of emerging technologies that can pose — don’t have to, but can pose — to our democracy and our values.  
 
“Americans are seeing how advanced artificial intelligence and the pace of innovation have the power to disrupt jobs and industries.
 
“These commitments — these commitments are a promising step, but the — we have a lot more work to do together. 

“Realizing the promise of AI by managing the risk is going to require some new laws, regulations, and oversight.”
 
These seven leading AI companies are committing to:
 
Ensuring Products are Safe Before Introducing Them to the Public

  • The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.
  • The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.

Building Systems that Put Security First

  • The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.
  • The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released and a robust reporting mechanism enables them to be found and fixed quickly.

Earning the Public’s Trust

  • The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.
  • The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.
  • The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.   
  • The companies commit to developing and deploying advanced AI systems to help address society’s greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all.

As we advance this agenda at home, the Administration will work with allies and partners to establish a strong international framework to govern the development and use of AI. It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The United States seeks to ensure that these commitments support and complement Japan’s leadership of the G-7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom’s leadership in hosting a Summit on AI Safety, and India’s leadership as Chair of the Global Partnership on AI. 
 
This announcement is part of a broader commitment by the Biden-Harris Administration to ensure AI is developed safely and responsibly, and to protect Americans from harm and discrimination.

  • Earlier this month, Vice President Harris convened consumer protection, labor, and civil rights leaders to discuss risks related to AI and reaffirm the Biden-Harris Administration’s commitment to protecting the American public from harm and discrimination.
     
  • Last month, President Biden met with top experts and researchers in San Francisco as part of his commitment to seizing the opportunities and managing the risks posed by AI, building on the President’s ongoing engagement with leading AI experts.
     
  • In May, the President and Vice President convened the CEOs of four American companies at the forefront of AI innovation—Google, Anthropic, Microsoft, and OpenAI—to underscore their responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society. At the companies’ request, the White House hosted a subsequent meeting focused on cybersecurity threats and best practices.
     
  • The Biden-Harris Administration published a landmark Blueprint for an AI Bill of Rights to safeguard Americans’ rights and safety, and U.S. government agencies have ramped up their efforts to protect Americans from the risks posed by AI, including through preventing algorithmic bias in home valuation and leveraging existing enforcement authorities to protect people from unlawful bias, discrimination, and other harmful outcomes.
     
  • President Biden signed an Executive Order that directs federal agencies to root out bias in the design and use of new technologies, including AI, and to protect the public from algorithmic discrimination.
     
  • Earlier this year, the National Science Foundation announced a $140 million investment to establish seven new National AI Research Institutes, bringing the total to 25 institutions across the country.
     
  • The Biden-Harris Administration has also released a National AI R&D Strategic Plan to advance responsible AI.
     
  • The Office of Management and Budget will soon release draft policy guidance for federal agencies to ensure the development, procurement, and use of AI systems is centered around safeguarding the American people’s rights and safety.

Biden Administration Takes Steps to Promote Responsible Development of Artificial Intelligence-Before It’s Too Late

With so much concern raised about the explosive increase in use of artificial intelligence, the Biden-Harris Administration announced new actions that will further promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety. These steps build on the Administration’s strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government’s ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities.

AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks. President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.

Vice President Harris and senior Administration officials met on May 4 with CEOs of four American companies at the forefront of AI innovation—Alphabet, Anthropic, Microsoft, and OpenAI—to underscore this responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society. The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues.

This effort builds on the considerable steps the Administration has taken to date to promote responsible innovation. These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.

The Administration has also taken important actions to protect Americans in the AI age. In February, President Biden signed an Executive Order that directs federal agencies to root out bias in their design and use of new technologies, including AI, and to protect the public from algorithmic discrimination. Last week, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Department of Justice’s Civil Rights Division issued a joint statement underscoring their collective commitment to leverage their existing legal authorities to protect the American people from AI-related harms.

The Administration is also actively working to address the national security concerns raised by AI, especially in critical areas like cybersecurity, biosecurity, and safety. This includes enlisting the support of government cybersecurity experts from across the national security community to ensure leading AI companies have access to best practices, including protection of AI models and networks.

The Administration’s announcements include:

  • New investments to power responsible American AI research and development (R&D). The National Science Foundation is announcing $140 million in funding to launch seven new National AI Research Institutes. This investment will bring the total number of Institutes to 25 across the country, and extend the network of organizations involved into nearly every state. These Institutes catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good. In addition to promoting responsible innovation, these Institutes bolster America’s AI R&D infrastructure and support the development of a diverse AI workforce. The new Institutes announced today will advance AI R&D to drive breakthroughs in critical areas, including climate, agriculture, energy, public health, education, and cybersecurity.
     
  • Public assessments of existing generative AI systems. The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles—on an evaluation platform developed by Scale AI—at the AI Village at DEF CON 31. This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.
     
  • Policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities. The Office of Management and Budget (OMB) is announcing that it will be releasing draft policy guidance on the use of AI systems by the U.S. government for public comment. This guidance will establish specific policies for federal departments and agencies to follow in order to ensure their development, procurement, and use of AI systems centers on safeguarding the American people’s rights and safety. It will also empower agencies to responsibly leverage AI to advance their missions and strengthen their ability to equitably serve Americans—and serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI. OMB will release this draft guidance for public comment this summer, so that it will benefit from input from advocates, civil society, industry, and other stakeholders before it is finalized.

FACT SHEET: Biden-Harris Administration Announces National Standards Strategy for Critical and Emerging Technology
 

The Biden-Harris Administration released the United States Government’s National Standards Strategy for Critical and Emerging Technology (Strategy), which will strengthen both the United States’ foundation to safeguard American consumers’ technology and U.S. leadership and competitiveness in international standards development.

Standards are the guidelines used to ensure the technology Americans routinely rely on is universally safe and interoperable. This Strategy will renew the United States’ rules-based approach to standards development. It also will emphasize the Federal Government’s support for international standards for critical and emerging technologies (CETs), which will help accelerate standards efforts led by the private sector to facilitate global markets, contribute to interoperability, and promote U.S. competitiveness and innovation.

The Strategy focuses on four key objectives that will prioritize CET standards development:

  • Investment: Technological contributions that flow from research and development are the driving force behind new standards. The Strategy will bolster investment in pre-standardization research to promote innovation, cutting-edge science, and translational research to drive U.S. leadership in international standards development. The Administration is also calling on the private sector, universities, and research institutions to make long-term investments in standards development.
     
  • Participation: Private sector and academic innovation fuels effective standards development, which is why it is imperative that the United States work closely with industry and the research community to remain ahead of the curve. The U.S. Government will engage with a broad range of private sector, academic, and other key stakeholders, including foreign partners, to address gaps and bolster U.S. participation in CET standards development activities.
     
  • Workforce: The number of standards organizations has grown rapidly over the past decade, particularly with respect to CETs, but the U.S. standards workforce has not kept pace. The U.S. Government will invest in educating and training stakeholders — including academia, industry, small- and medium-sized companies, and members of civil society — to more effectively contribute to technical standards development.
     
  • Integrity and Inclusivity: It is essential for the United States to ensure the standards development process is technically sound, independent, and responsive to broadly shared market and societal needs. The U.S. Government will harness the support of like-minded allies and partners around the world to protect the integrity of the international standards system, ensuring that international standards are established on the basis of technical merit through fair processes that promote broad participation from countries across the world and build inclusive growth for all.

Putting the Strategy into Practice

The U.S. private sector leads standards activities globally, through standards development organizations (SDOs), to respond to market demand, with substantial contributions from the U.S. Government, academia, and civil society groups. The American National Standards Institute (ANSI) coordinates U.S. private sector standards activities, while the National Institute of Standards and Technology (NIST) coordinates Federal Government engagement in standards activities. Industry associations, consortia, and other private sector groups work together within this system to develop standards to solve specific challenges. To date, this approach has fostered an effective and innovative standards system that has supercharged economic growth and worked for people of all nations.

The CHIPS and Science Act of 2022 (Pub. L. 117–167) provided $52.7 billion for American semiconductor research, development, manufacturing, and workforce development. The legislation also codifies NIST’s role in leading information exchange and coordination among Federal agencies and communication from the Federal Government to the U.S. private sector. This engagement, coupled with the CHIPS and Science Act’s investments in pre-standardization research, will drive U.S. influence and leadership in international standards development. NIST provides a portal with resources and standards information to government, academia, and the public; updates on the U.S. Government’s implementation efforts for the Strategy will also be posted to that portal.

The United States Government has already made significant commitments to leading and coordinating international efforts outlined in the Strategy.  The United States has joined like-minded partners in the International Standards Cooperation Network, which serves as a mechanism to connect government stakeholders with international counterparts for inter-governmental cooperation.  Additionally, the U.S.-EU Trade and Technology Council launched a Strategic Standardization Information mechanism to enable transatlantic information sharing. 
  
Many U.S. Government agencies have already demonstrated their commitment to the Strategy through their actions and partnerships. Examples include: 

  • The National Science Foundation has updated its proposal and award policies and procedures to incentivize participation in standards development activities. 
     
  • The Department of State, NIST, the Department of Commerce, the Federal Communications Commission (FCC), the National Security Agency (NSA), the Office of the U.S. Trade Representative, USAID and other agencies engage in multilateral fora, such as the International Telecommunication Union, the Quad, the U.S.-EU Trade and Technology Council, the G7, and the Asia-Pacific Economic Cooperation, to share information on standards and CETs.
     
  • The National Telecommunications and Information Administration (NTIA) administers the Public Wireless Supply Chain Innovation Fund, a $1.5 billion grant program funded by the CHIPS and Science Act of 2022 that aims to catalyze the research, development, and adoption of open, interoperable, and standards-based networks. 
     
  • The Department of Defense engages with ANSI and the private sector in collaborative standards activities such as Global Supply Chain Security for Microelectronics and the Additive Manufacturing Standards Roadmap, as well as with the Alliance for Telecommunications Industry Solutions and the 3rd Generation Partnership Project (3GPP).
     
  • The United States Agency for International Development and ANSI work together through a public-private partnership to support the capacity of developing countries in areas of standards development, conformity assessment, and private sector engagement.
     
  • The Environmental Protection Agency SmartWay program works closely with the International Organization for Standardization (ISO) to standardize greenhouse gas accounting for freight and passenger transportation, providing a global framework for credible, accurate calculation and evaluation of transportation-related climate pollutants.
     
  • NTIA, NIST, and the FCC coordinate U.S. Government participation in 3GPP and work with the Alliance for Telecommunications Industry Solutions to ensure participation by international standards delegates at North American-hosted 3GPP meetings.
     
  • The FCC’s newly established Office of International Affairs is managing efforts across the FCC to ensure expert participation in international standards activities, such as 3GPP and the Internet Engineering Task Force, in order to promote U.S. leadership in 5G and other next-generation technologies.
     
  • The Department of Transportation supports development of voluntary consensus technical standards via multiple cooperative efforts with U.S.-domiciled and international SDOs.
     
  • The U.S. Department of Energy (DOE), through partnerships with the private sector and the contributions of technical experts at DOE and its 17 National Laboratories, contributes to standards efforts in multiple areas ranging from hydrogen and energy storage to biotechnology and high-performance computing.
     
  • The Department of the Treasury’s Office of Financial Research leads and contributes to financial data standards development work for digital identity, digital assets, and distributed ledger technology in ISO and ANSI.

The actions laid out in the Strategy align with principles set forth in the National Security Strategy, the National Cybersecurity Strategy, and ANSI’s United States Standards Strategy, and will not only protect the integrity of standards development but also ensure the long-term success of U.S. innovation.