Biden to issue first rule on artificial intelligence systems

President Biden will issue an executive order on Monday outlining the federal government’s first rules on artificial intelligence systems. They include requirements that the most advanced AI products be tested to ensure they cannot be used to make biological or nuclear weapons, with the findings of those tests reported to the federal government.

The testing requirements are a small but central part of what Mr. Biden, in a speech scheduled for Monday afternoon, plans to describe as the most comprehensive government action yet to protect Americans from the potential risks posed by the huge surges in AI over the past several years.

The rules would include recommendations, but not requirements, that photos, videos and audio developed by such systems be watermarked to make clear they were created by AI. The recommendation reflects growing fears that AI will make it too easy to create “deep fakes” and misleading disinformation, especially as the 2024 presidential campaign heats up.

The United States recently restricted the export of high-performance chips to China to slow its ability to produce so-called large language models, the accumulation of data that has made programs like ChatGPT so effective at answering questions and speeding up tasks. Similarly, under the new rules, companies running cloud services will have to inform the government about their foreign customers.

Mr. Biden’s order will be issued days before a gathering of world leaders on AI safety hosted by British Prime Minister Rishi Sunak. On the issue of AI regulation, the United States lags behind the European Union, which is drafting new laws, while other countries such as China and Israel have issued regulatory proposals of their own. Ever since ChatGPT, an AI-powered chatbot, exploded in popularity last year, lawmakers and global regulators have grappled with how artificial intelligence could replace jobs, spread misinformation and potentially develop intelligence of its own.

“President Biden is taking the strongest action ever taken by any government in the world on AI safety, security and trust,” said Bruce Reed, the White House deputy chief of staff. “This is the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and minimize the risks.”

The new US rules, some of which are due to come into effect in the next 90 days, are likely to face a number of challenges, some legal and some political. But the order is aimed at the most advanced future systems, and it largely does not address the immediate dangers of existing chatbots that could be used to spread misinformation related to Ukraine, Gaza or the presidential campaign.

The administration did not release the language of the executive order on Sunday, but officials said some steps in the order will require approval from independent agencies such as the Federal Trade Commission.

The order only affects American companies, but because software development occurs worldwide, the United States will face diplomatic challenges in enforcing it, which is why the administration is trying to encourage allies and adversaries alike to develop similar rules. Vice President Kamala Harris is representing the United States at a conference on the topic in London this week.

The rules also aim to impact the technology sector by setting standards for safety, security and consumer protection for the first time. Using the power of its purse strings, White House directives to federal agencies aim to force companies to comply with standards set by their government customers.

“This is an important first step and the important thing is that the executive order sets the parameters,” said Lauren Kahn, a senior research analyst at the Center for Security and Emerging Technology at Georgetown University.

The order directs the Department of Health and Human Services and other agencies to create clear safety standards for the use of AI and streamline the system to make it easier to buy AI tools. It orders the Labor Department and the National Economic Council to study the impact of AI on the labor market and come up with potential regulations. And it calls on agencies to provide clear guidance to landlords, government contractors, and federal benefit programs to prevent discrimination from algorithms used in AI tools.

But the White House’s authority is limited, and some directives are not enforceable. For example, the order calls on agencies to strengthen internal guidelines to protect personal consumer data, but the White House also acknowledged the need for privacy legislation to fully ensure data security.

To encourage innovation and promote competition, the White House will request that the FTC expand its role as a watchdog over consumer protection and antitrust violations. But the White House does not have the authority to direct the FTC, an independent agency, to make rules.

Federal Trade Commission Chairwoman Lina Khan has already signaled her intention to act more aggressively as an AI watchdog. In July, the commission opened an investigation into OpenAI, the creator of ChatGPT, over allegations of potential consumer privacy violations and the spread of misinformation about individuals.

“Although these tools are new, they are not exempt from existing regulations, and the FTC will vigorously enforce the laws we are charged with serving, even in this new market,” Ms. Khan wrote in a guest essay in The New York Times in May.

The tech industry has said it supports the regulations, although companies disagree on the level of government oversight. Microsoft, OpenAI, Google and Meta are among 15 companies that have agreed to voluntary safety and security commitments, including having their systems stress-tested by third parties for vulnerabilities.

Mr. Biden has called for rules that support opportunities for AI to help medical and climate research, as well as guardrails to protect against abuse. He has stressed the need to balance regulation with support for American companies in the global race for AI leadership. To that end, the order directs agencies to streamline the visa process for highly skilled immigrants and nonimmigrants with expertise in AI to study and work in the United States.

The central rules to protect national security will be outlined in a separate document, called a National Security Memorandum, to be drafted by next summer. Some of those rules will be public, but many are expected to remain classified – particularly those related to steps to prevent foreign countries, or non-state actors, from exploiting AI systems.

A senior Energy Department official said last week that the National Nuclear Security Administration has already begun exploring how these systems could speed nuclear proliferation by solving complex problems in building a nuclear weapon. And many officials have focused on how these systems could enable a terrorist group to assemble what it needs to produce biological weapons.

Still, lawmakers and White House officials have cautioned against moving too quickly to write laws for rapidly changing AI technologies. The EU’s first legislative drafts did not account for large language models.

“If you move too quickly into this, you could screw it up,” Senator Chuck Schumer, the majority leader and Democrat of New York, said last week.


