The European Union on Wednesday took a significant step toward passing one of the first major laws regulating artificial intelligence, a potential model for policymakers around the world as they struggle with how to rein in the fast-evolving technology.
The European Parliament, one of the main legislative branches of the European Union, passed a draft law known as the AI Act, which would impose new restrictions on what are seen as the riskiest uses of the technology. It would severely curtail the use of facial recognition software, while requiring makers of AI systems like the ChatGPT chatbot to disclose more about the data used to build their programs.
The vote is one step in a longer process; the final version of the law is not expected to be passed until later this year.
The European Union is further along than the United States and other large Western governments in regulating AI. The 27-nation bloc has debated the topic for more than two years, and the issue took on new urgency after the release of ChatGPT last year heightened concerns about the technology's possible effects on employment and society.
Policymakers everywhere from Washington to Beijing are now racing to regulate an emerging technology that is worrying even some of its earliest creators. In the United States, the White House has issued policy ideas that include rules for testing AI systems before they are publicly available and for protecting privacy rights. In China, draft regulations unveiled in April would require makers of chatbots to comply with the country's strict censorship rules. Beijing is also taking more control over how the makers of AI systems use data.
How effective any regulation of AI can be is unclear. In a sign that the technology's new capabilities are emerging faster than lawmakers can address them, earlier versions of the EU legislation paid little attention to so-called generative AI systems such as ChatGPT, which produce text, images and video in response to prompts.
Under the latest version of Europe’s bill passed on Wednesday, generative AI would face new transparency requirements, including publishing summaries of copyrighted material used to train the systems, a proposal supported by the publishing industry but one that tech developers say is technically infeasible. Makers of generative AI systems would also have to put safeguards in place to prevent them from generating illegal content.
Francine Bennett, acting director of the Ada Lovelace Institute, an organization in London that has pushed for new AI laws, said the EU proposal was an “important milestone”.
“It’s hard to regulate fast-moving and increasingly reusable technology when even the companies building the technology aren’t entirely clear how things will go,” Ms Bennett said. “But it would certainly be bad for all of us to continue to operate without adequate regulation.”
The European bill takes a “risk-based” approach to regulating AI, focusing on applications with the greatest potential for human harm. This would include cases where AI systems are used to operate critical infrastructure such as water or energy, in the legal system, and in determining access to public services and government benefits. Makers of the technology would have to conduct risk assessments before putting it into everyday use, similar to the drug approval process.
A tech industry group, the Computer and Communications Industry Association, said the EU should avoid overly broad regulations that stifle innovation.
“The EU is set to become a leader in regulating artificial intelligence, but whether it will lead AI innovation remains to be seen,” said Boniface de Champrice, the group’s Europe policy manager. “Europe’s new AI regulations need to effectively address clearly defined risks, while leaving enough flexibility for developers to deliver useful AI applications for the benefit of all Europeans.”
A major area of debate is the use of facial recognition. The European Parliament voted to ban the use of live facial recognition, but questions remain about whether exemptions should be allowed for national security and other law enforcement purposes.
Another provision would ban companies from scraping biometric data from social media to build a database, a practice that came under scrutiny after it was used by facial recognition company Clearview AI.
Tech leaders are also trying to influence the debate. Sam Altman, chief executive of OpenAI, the maker of ChatGPT, has in recent months visited with at least 100 US lawmakers and other global policymakers in South America, Europe, Africa and Asia, including European Commission President Ursula von der Leyen. Mr Altman has called for regulation of AI, but has also said the EU proposal could be prohibitively difficult to comply with.
Following Wednesday’s vote, a final version of the law will be negotiated by representatives of the EU’s three branches: the European Parliament, the European Commission and the Council of the European Union. Officials said they hoped to reach a final agreement by the end of the year.