AI regulation is in its ‘early days’

Regulating artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and news conferences and the White House on Friday announcing voluntary AI safety commitments from seven technology companies.

But a closer look at the activity raises questions about how meaningful the action is in setting policies around the rapidly evolving technology.

The answer: not very meaningful, at least not yet. Lawmakers and policy experts said the United States is only at the beginning of what will likely be a long and difficult road toward creating AI rules. While there have been hearings, White House meetings with top tech executives and speeches introducing AI bills, it is too early to predict even the roughest sketch of regulations to protect consumers and contain the risks the technology poses to jobs, the spread of disinformation and security.

“It’s still early days, and no one knows what the legislation will look like,” said Chris Lewis, president of the consumer group Public Knowledge, which has called for the creation of an independent agency to regulate AI and other tech companies.

The United States lags far behind Europe, where lawmakers are preparing to pass an AI law later this year that would impose new restrictions on what are seen as the riskiest uses of the technology. In contrast, there remains much disagreement in the United States over the best way to handle a technology that many American lawmakers are still trying to understand.

That suits many tech companies, policy experts said. While some companies have said they welcome rules around AI, they have also argued against the tougher regulations being created in Europe.

Here’s a rundown on the state of AI regulations in the United States.

The Biden administration has been on a listening tour with AI companies, academics and civil society groups. The effort began in May when Vice President Kamala Harris met at the White House with the chief executives of Microsoft, Google, OpenAI and Anthropic and implored the tech industry to take safety more seriously.

On Friday, representatives from seven tech companies appeared at the White House to announce a set of principles to make their AI technologies safer, including third-party security checks and watermarking of AI-generated content to help prevent the spread of misinformation.

Many of the practices that were announced were already in place at OpenAI, Google and Microsoft, or on the way to being implemented. They are not enforceable by law. And the promises of self-regulation fell short of what consumer groups had hoped for.

“Voluntary commitments are not enough when it comes to Big Tech,” said Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators must put in place meaningful, enforceable guardrails to ensure that the use of AI is fair, transparent, and protects the privacy and civil rights of individuals.”

Last fall, the White House introduced a blueprint for an AI Bill of Rights, a set of guidelines on consumer protections with the technology. The guidelines aren't regulations, either, and are not enforceable. This week, White House officials said they were developing an executive order on AI but did not reveal details or timing.

The loudest drumbeat on regulating AI has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include the creation of an agency to oversee AI, liability for AI technologies that spread misinformation, and licensing requirements for new AI tools.

Lawmakers have also held hearings about AI, including one in May with Sam Altman, chief executive of OpenAI, which makes the ChatGPT chatbot. Some lawmakers floated ideas for other rules during the hearing, including nutrition-style labels to inform consumers of AI risks.

The bills are in their earliest stages and have not yet gathered the support needed to advance. Last month, Senate majority leader Chuck Schumer, Democrat of New York, announced a monthslong process to craft AI legislation that included educational sessions for members.

“In many ways we are starting from zero, but I believe Congress is up to the challenge,” he said during a speech at the Center for Strategic and International Studies.

Regulatory agencies are starting to crack down on some of the issues arising from AI

Last week, the Federal Trade Commission launched an investigation into OpenAI’s ChatGPT, seeking information about how the company secures its systems and how the chatbot could potentially harm consumers through the creation of false information. FTC chair Lina Khan has said she believes the agency has ample power under consumer protection and competition laws to police problematic behavior by AI companies.

“Waiting for congressional action is not ideal given the normal timelines for congressional action,” said Andres Sawicki, a law professor at the University of Miami.
