An industry insider offers an open alternative to Big Tech’s AI

Ali Farhadi is no technological rebel.

The 42-year-old computer scientist is a highly respected researcher, professor at the University of Washington, and founder of a start-up that was acquired by Apple, where he worked until four months ago.

But Mr. Farhadi, who became chief executive of the Allen Institute for AI in July, is calling for “radical openness” to democratize research and development of a new wave of artificial intelligence, which many consider the most significant technology advance in decades.

The Allen Institute has launched an ambitious initiative to create a freely available AI alternative to tech giants like Google and start-ups like OpenAI. Through an industry practice called open sourcing, other researchers will be allowed to examine and use the new system and the data behind it.

The stance taken by the Allen Institute, an influential non-profit research center in Seattle, puts it squarely on one side of a fierce debate over how open or closed new AI should be. Will opening up so-called generative AI, which powers chatbots like OpenAI’s ChatGPT and Google’s Bard, spur more innovation and opportunity? Or will it open a Pandora’s box of digital harm?

Definitions of what “open” means in the context of generative AI vary. Traditionally, software projects have open sourced the underlying “source” code for programs. Anyone can then inspect the code, identify bugs, and suggest changes, subject to rules governing how modifications are made and distributed.

That approach is behind popular, widely used open-source projects such as the Linux operating system, the Apache web server, and the Firefox browser.

But generative AI technology involves much more than code. AI models are trained and refined after sifting through huge amounts of data.

However, experts warn that while the intentions may be good, the path the Allen Institute is taking is inherently risky.

“Decisions about the openness of AI systems are irreversible, and will likely be among the most consequential decisions of our time,” said Aviv Ovadya, a researcher at the Berkman Klein Center for Internet and Society at Harvard. He believes that international agreements are needed to determine which technology should not be released publicly.

Generative AI is powerful but often unpredictable. It can instantly compose emails, poems, and term papers, and answer any imaginable question with human fluency. But it also has a disturbing tendency to make things up, a behavior researchers call “hallucination.”

The major chatbot makers, Microsoft-backed OpenAI and Google, have kept their new technology under wraps and have not disclosed how their AI models are trained and tuned. Google, notably, has a long history of publishing its research and sharing its AI software, but it has kept its technology to itself as it developed Bard.

The companies say this approach reduces the risk that criminals hijack the technology to flood the Internet with misinformation and scams or engage in more dangerous behavior.

Proponents of open systems acknowledge the risks but say a better solution is to have more smart people working to combat them.

When Meta released an AI model called LLaMA (Large Language Model Meta AI) this year, it caused a stir. Mr. Farhadi praised Meta’s move but doesn’t think it goes far enough.

“Their attitude is basically: I made some magic. I’m not going to tell you what it is,” he said.

Mr. Farhadi proposes disclosing the technical details of AI models, including the data they were trained on, the fine-tuning that was done, and the tools used to evaluate their behavior.

The Allen Institute has taken a first step by releasing a huge data set for training AI models. It is composed of publicly available data from the web, books, academic journals, and computer code. The data set has been curated to remove personally identifiable information and toxic language, such as racist and obscene phrases.

Curating the data involves judgment calls. Would removing some language deemed toxic reduce a model’s ability to detect hate speech?
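As an illustration only, and not the Allen Institute’s actual pipeline, a simple keyword-based filter shows how this kind of curation, and the trade-off it raises, can work. The blocklist terms and threshold below are hypothetical; real curation relies on much larger term lists and more sophisticated classifiers.

    # Hypothetical keyword-based corpus filter, illustrating the judgment call above.
    BLOCKLIST = {"offensive_term_a", "offensive_term_b"}  # placeholder terms

    def keep_document(text: str, max_hits: int = 0) -> bool:
        """Keep a document only if it contains no more blocklisted terms than allowed."""
        hits = sum(1 for word in text.lower().split() if word in BLOCKLIST)
        return hits <= max_hits

    corpus = [
        "an ordinary web page about cooking",
        "a forum post containing offensive_term_a",                # dropped by the filter
        "a news report quoting offensive_term_a to condemn it",    # also dropped: the trade-off
    ]
    cleaned = [doc for doc in corpus if keep_document(doc)]
    print(len(cleaned))  # 1

The third document shows the dilemma: filtering out toxic phrases also removes text that discusses or condemns them, which a model might need in order to learn to detect hate speech.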

The Allen Institute’s data repository is the largest open data set currently available, Mr. Farhadi said. It has been downloaded more than 500,000 times since its release in August on Hugging Face, a site for open-source AI resources and collaboration.
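For readers who want to examine such a release themselves, here is a sketch of loading an openly published corpus from Hugging Face with the `datasets` library. The dataset id and the “text” field name are placeholders for illustration, not necessarily the Allen Institute’s actual release.

    # Sketch: stream an open pretraining corpus from Hugging Face.
    # The dataset id and field name below are illustrative placeholders.
    from datasets import load_dataset

    # Streaming avoids downloading the full corpus to disk at once.
    corpus = load_dataset("allenai/example-open-corpus", split="train", streaming=True)

    # Peek at the first three documents.
    for i, record in enumerate(corpus):
        print(record["text"][:200])
        if i >= 2:
            break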

At the Allen Institute, the data set will be used to train and fine-tune a large generative AI program, OLMo (Open Language Model), which will be released this year or early next year.

Large commercial AI models are “black box” technology, Mr. Farhadi said. “We’re pushing for a glass box,” he said. “Open the whole thing up, and then we can talk about behavior and partially explain what’s going on inside.”

Only a few core generative AI models of the scale the Allen Institute is planning are openly available. They include Meta’s LLaMA and Falcon, a project backed by the Abu Dhabi government.

The Allen Institute seems like a logical home for a large AI project. “It is well-funded, but operates with academic values, and it has a history of helping advance open science and AI technology,” said Zachary Lipton, a computer scientist at Carnegie Mellon University.

The Allen Institute is working with others to advance its open approach. This year, the non-profit Mozilla Foundation invested $30 million in a start-up, mozilla.ai, to create open-source software. The effort will initially focus on developing tools that surround open AI engines, like the Allen Institute’s, to make them easier to use, monitor, and deploy.

The Mozilla Foundation, founded in 2003 to promote the Internet as a global resource open to all, is concerned about the further concentration of technology and economic power.

“A small group of players on the West Coast of the US is trying to lock down the generative AI space before it’s really even gotten out of the gate,” said Mark Surman, president of the foundation.

Mr. Farhadi and his team have spent time working to control the risks of their openness strategy. For example, they are developing ways to evaluate a model’s behavior in the training phase and then prevent certain actions, such as racial discrimination and the creation of biological weapons.

Mr. Farhadi considers the guardrails in large chatbot models to be Band-Aids that can be easily torn off by clever hackers. “My argument is that we should not allow that kind of knowledge to be incorporated into these models,” he said.

People will do bad things with this technology, Mr. Farhadi said, as they have done with all powerful technologies. He said society’s job is to better understand and manage risks. He argues that openness is the best option for achieving security and sharing economic opportunities.

“Regulation will not solve this on its own,” Mr. Farhadi said.

The Allen Institute’s effort faces some formidable obstacles. A major one is that building and improving a large generative model requires a lot of computing firepower.

Mr. Farhadi and his colleagues say emerging software techniques are more efficient. Still, he estimates that the Allen Institute initiative will require $1 billion worth of computing over the next few years. He has begun trying to garner support from government agencies, private companies, and tech philanthropists, but he declined to say whether he had lined up backers or to name them.

If he succeeds, the bigger test will be nurturing a sustainable community to support the project.

“Breaking into the big players requires an ecosystem of open players,” said Mr. Surman of the Mozilla Foundation. “And the challenge in a game like that is just patience and perseverance.”


