It's unclear to what extent tech companies will sign up to the voluntary guidelines and exactly how they will fit into the EU's AI Act. But there are signs that the EU and US approaches should be interoperable.
The United States has published its response to the EU's artificial intelligence law, a voluntary set of recommendations aimed at making companies more responsible in how they develop AI systems.
The National Institute of Standards and Technology (NIST) released its AI risk management framework last week, but it's unclear how strictly big tech companies will adopt the guidelines, or how the recommendations will fit into Brussels' artificial intelligence legislation.
"We believe this voluntary framework will help develop and deploy AI in ways that enable organizations, in the US and other nations, to improve AI reliability while managing risk based on our democratic values," said Don Graves, US Deputy Secretary of Commerce, at the launch in Washington DC.
Home to many of the world's top AI companies, such as OpenAI and Google, the United States has no plans for binding legislation like the EU's. Instead, in 2020, Congress tasked NIST, which has traditionally focused on codifying scientific standards and measurements, with creating a kind of crib sheet for companies to follow when developing AI systems.
Since then, AI has made extraordinary advances, bringing many associated risks. Millions of users have adopted OpenAI's ChatGPT for educational, poetic and otherwise positive purposes, but a few have managed to goad the chatbot into providing instructions for making Molotov cocktails and crystal meth.
After 15 months of work and hundreds of submissions from companies, universities, and civil society, NIST has finally released version 1.0 of its AI risk framework.
It contains recommendations that, if followed, would change how tech companies staff their teams and would give third parties a much bigger role in building AI systems.
Domain experts, users and “affected communities” should be “consulted” when assessing the impact of AI systems “as needed”, it suggests.
Teams building AI systems should also be diverse, not just ethnically but also in disciplinary background, knowledge and experience, so they can spot problems that a more homogeneous team might miss.
And the framework calls for extensive documentation when building AI systems, including a record of the expected impact of AI tools not just on businesses and users, but on society at large and the planet.
Unlike the EU bill, the framework prohibits no uses of AI. And the level of risk companies are willing to accept when deploying artificial intelligence systems is left up to them. “While the AI risk management framework can be used to prioritize risk, it does not prescribe risk tolerance,” it says.
The question now is whether companies will really take NIST's advice on board. Kush Varshney, who leads IBM's machine learning group, gave the framework a modest endorsement at the launch, saying it would be "very helpful" in steering the company's research and innovation in "important directions for industry, government and society at large".
A spokeswoman for DeepMind, a leading AI lab owned by Alphabet, Google's parent company, said it is "reviewing the content that NIST publishes and shares with our internal teams" and would share its own case studies with the NIST resource center. Although DeepMind is based in the UK, its AI is used to power Google products.
“The NIST AI framework is something we look forward to seeing implemented […]
Many large companies already have risk management frameworks in place, he said. Instead, NIST's insights could be useful for small and medium-sized companies that lack the resources to develop their own risk-checking procedures, he suggested.
While the framework does not have the force of law, the hope is that companies will adopt it to limit their liability if they are sued over an artificial intelligence system malfunction. And companies can start using it now, while the EU's AI law may face years more wrangling in Brussels before it takes effect.
But what adopting the framework means in practice is slippery, since NIST itself encourages companies to modify and adapt its recommendations according to the type of AI tools they build.
Adopting it “could mean a lot of things,” Gutiérrez said. "It could mean they take one piece, or it could mean they take it all." And there is no way for third parties to verify that the recommendations are being followed, he warned.
Marc Rotenberg, president of the Center for AI and Digital Policy, a Washington, DC-based think tank, called the NIST framework "an excellent resource for organizations prior to implementing AI systems."
But it is not a substitute for a legal framework "to ensure the effective allocation of rights and responsibilities", he said.
Another question is how the NIST guidelines will combine with the upcoming EU AI Act. Companies may need to comply with NIST recommendations to reduce their US legal liabilities and comply with EU law to avoid hefty fines from Brussels.
But Gutiérrez sees scope for the two to work together. The draft EU AI Act stipulates that companies need a risk management framework to assess the dangers of deployment, and companies could follow NIST's recommendations to check that box. “It would be a good way to complement each other,” he said.
In a sign that it is working towards interoperability, NIST published guidance on how the terms in its framework map to those of the EU AI Act, as well as to other AI governance tools.
The US and EU are collaborating on AI through the Trade and Technology Council, a regular meeting of senior officials. At their last meeting in December, Washington and Brussels announced a "joint roadmap" to define key AI terms and common metrics for measuring AI reliability. That doesn't mean they'll regulate technology in the same way, but a common terminology can help companies better navigate laws and guidelines on both sides of the Atlantic.
And last week Brussels and Washington announced they would jointly conduct AI research to address global challenges, including weather forecasting, power grid optimization and emergency response management.
“We are hopeful for a transatlantic approach to risk management,” said Alexandra Belias, director of international public policy at DeepMind. “We hope to exchange best practices through this medium,” she said, referring to the joint roadmap.
Bill of rights
There is also confusion about how the NIST guidelines will work alongside the US's so-called AI "Bill of Rights", released by the White House Office of Science and Technology Policy (OSTP) last year.
Despite the name, these recommendations are also non-binding. They seek to create a set of principles to protect the public from discriminatory algorithms and opaque AI decisions, among other issues.
But the bill has met resistance in Washington, where Republicans took over key science oversight positions after their election victories last year. Earlier this month, two senior Republican lawmakers criticized the OSTP for sending "mixed messages" about US AI policy, demanding answers in a public letter about how the bill was created. One of them is Frank Lucas, the new chairman of the House Science, Space and Technology Committee.
They worry that the AI Bill of Rights will cut across the work NIST has just completed, and they appear concerned that it could slow down American business and undermine American technology leadership. They also demanded that the OSTP disclose whether the AI Bill of Rights is intended to serve as the basis for future legislation.
“It is vital to our economic and national security that the United States maintain its leadership in responsible AI research, development and standards,” they said.
But neither Rotenberg nor Gutiérrez sees any conflict between the AI Bill of Rights and the NIST framework. NIST's job is to provide guidance to companies, Rotenberg said, while the bill seeks to protect those who are subject to AI-based decisions.
The lawmakers' letter is "counterproductive and ignores the real issues, addressed by OSTP, that are widely known in the AI community," he said.