Executive order on AI seeks to balance the potential and danger of the technology

How do you regulate something that has the potential to both help and harm people, that affects every sector of the economy and that is changing so rapidly that even experts can’t keep up?

This has been the main challenge for governments when it comes to artificial intelligence.

Regulate AI too slowly and you may miss the opportunity to prevent potential threats and dangerous misuse of the technology.

React too fast and you risk writing bad or harmful rules, stifling innovation or ending up in the European Union’s situation: it released the first draft of its AI Act in 2021, just before a wave of new generative AI tools arrived and rendered much of it obsolete. (The proposal, which has not yet become law, was later rewritten to shoehorn in the new technology, but the result is still awkward.)

On Monday, the White House announced its effort to regulate the fast-moving world of AI with a sweeping executive order that imposes new rules on companies and directs several federal agencies to begin putting guardrails around the technology.

The Biden administration, like other governments, has been under pressure to do something about the technology since late last year, when ChatGPT and other generative AI apps burst into the public consciousness. AI companies have sent executives to testify before Congress and brief lawmakers on the promise and pitfalls of the technology, while activist groups have urged the federal government to curb dangerous uses of AI, such as the creation of new cyberweapons and deceptive deepfakes.

Meanwhile, a culture battle has raged in Silicon Valley, with some researchers and experts urging the AI industry to slow down and others pushing for its full-throttle acceleration.

President Biden’s executive order tries to chart a middle path, imposing some modest rules while allowing AI development to continue largely unimpeded, and signaling that the federal government intends to keep a close eye on the AI industry for years to come. And unlike social media, a technology that was allowed to grow unhindered for more than a decade before regulators showed any interest in it, AI will not be allowed to fly under the radar; the order signals that the Biden administration has no intention of letting that happen.

The full executive order, at more than 100 pages, seems to have something for almost everyone.

The most worried AI safety advocates, such as those who signed an open letter this year claiming that AI poses an extinction risk on a par with pandemics and nuclear weapons, will be happy that the order imposes new requirements on companies building powerful AI systems.

Specifically, companies building the largest AI systems will be required to notify the government and share the results of their safety testing before releasing their models to the public.

These reporting requirements will apply to models above a certain threshold of computing power (more than 100 septillion integer or floating-point operations, if you’re curious), which will likely cover the next generation of models developed by OpenAI, Google, and other companies building large AI systems.

These requirements will be enforced through the Defense Production Act, a 1950 law that gives the president broad authority to compel U.S. companies to support efforts deemed critical to national security. That could give the rules teeth that the administration’s earlier voluntary AI commitments lacked.

Additionally, the order requires cloud providers that rent computing power to AI developers, a list that includes Microsoft, Google and Amazon, to tell the government about their foreign customers. And it directs the National Institute of Standards and Technology to develop standardized tests for measuring the performance and safety of AI models.

The executive order also includes provisions that will please the AI ethics crowd, the activists and researchers who worry about near-term harms from AI, such as bias and discrimination, and who consider long-term fears of AI-driven extinction overblown.

Specifically, the order directs federal agencies to take steps to prevent AI algorithms from being used to exacerbate discrimination in housing, federal benefits programs, and the criminal justice system. And it directs the Commerce Department to develop guidance on watermarking AI-generated content, which could help curb the spread of AI-generated misinformation.

And what do the AI companies targeted by these rules think of them? Several executives I spoke to on Monday seemed relieved that the White House stopped short of requiring them to register for and obtain licenses to train large AI models, a proposed step that some in the industry had criticized as overly harsh. The order will not require them to pull any existing products off the market, or force them to disclose the kinds of information they prefer to keep private, such as the size of their models and the methods used to train them.

It also does not try to curb the use of copyrighted data in training AI models, a common practice that has come under attack from artists and other creative workers in recent months and is being challenged in court.

And tech companies stand to benefit from the order’s efforts to loosen immigration restrictions and streamline the visa process for workers with specialized expertise in AI, as part of a national push to grow the pool of AI talent.

Of course, not everyone will be thrilled. Hard-line safety activists may wish the White House had imposed stricter limits on the use of large AI models, or blocked the development of open-source models, whose code can be freely downloaded and used by anyone. And some ardent AI boosters may be upset that the government is doing anything at all to limit the development of a technology they consider mostly good.

But the executive order appears to strike a careful balance between practicality and caution, and absent congressional action to pass comprehensive AI regulations into law, it is likely to be the clearest guardrail we will get for the foreseeable future.

There will be other efforts to regulate AI, particularly in the EU, where the AI Act could become law as soon as next year, and in Britain, where a summit of global leaders this week is expected to produce new efforts to rein in AI development.

The White House’s executive order is a signal that it intends to move quickly. The question, as always, is whether AI itself will advance faster.


