Hey, Alexa, what should students be learning about AI?

Rohit Prasad, a senior Amazon executive, had an important message for the ninth- and 10th-grade students at Dearborn STEM Academy, a public school in Boston’s Roxbury neighborhood.

He came to the school on a recent morning to observe an Amazon-sponsored lesson in artificial intelligence that teaches students how to program simple tasks for Alexa, Amazon’s voice-activated virtual assistant. And he assured the students that hundreds of thousands of new jobs in AI would soon be created.

“We need to create talent for the next generation,” Mr. Prasad, Alexa’s chief scientist, told the class. “So we are educating about AI at the grassroots level as soon as possible.”

A few miles away, Sally Kornbluth, the president of the Massachusetts Institute of Technology, was delivering a more sobering message about AI to students from local schools who had gathered at Boston’s Kennedy Library campus for a workshop on AI risks and regulation.

“Because AI is such a powerful new technology, it really needs some rules to function well in society,” Dr. Kornbluth said. “What we have to do is make sure it doesn’t cause harm.”

The two events on the same day, one encouraging students to work in artificial intelligence and the other warning against deploying the technology too hastily, reflected the larger debate currently underway in the United States over the promise and potential peril of AI.

Both student workshops were organized by an MIT initiative on “responsible AI,” whose donors include Amazon, Google and Microsoft. And they underscored a question that has troubled school districts across the country this year: How should schools prepare students to navigate a world in which, according to some leading AI developers, the dominance of AI-powered tools seems inevitable?

Teaching AI in schools is nothing new. Courses such as computer science and civics now routinely include exercises on the social implications of facial recognition and other automated systems.

But the push for AI education intensified this year after news about ChatGPT, a novel chatbot that can generate humanlike homework essays and sometimes fabricates misinformation, began spreading in schools.

Now “AI literacy” is a new education buzz phrase. Schools are scrambling for resources to help teach it. Some universities, tech companies and nonprofits are responding with prepackaged curriculums.

The lessons are popping up even as schools wrestle with a fundamental question: Should they teach students to program and use AI tools, providing the technical skills employers are seeking? Or should students learn to anticipate and mitigate the pitfalls of AI?

Cynthia Breazeal, a professor at MIT who directs the university’s initiative Responsible AI for Social Empowerment and Education, said her program aims to help schools do both.

“We want students to be exposed to these technologies as informed, responsible users and informed, responsible designers,” said Dr. Breazeal, whose group organizes AI workshops for schools. “We want them to become informed, responsible citizens about these rapid developments in AI and the many ways it is affecting our personal and professional lives.”

(Disclosure: I was recently a fellow in the Knight Science Journalism Program at MIT.)

Other education experts say schools should also encourage students to consider the broader ecosystems in which AI systems operate. This may involve students researching the business models behind new technologies or examining how AI tools exploit user data.

“If we’re going to engage students in learning about these new systems, we really have to think about the context of these new systems,” said Jennifer Higgs, an assistant professor of learning and mind sciences at the University of California, Davis. But often, she said, “that piece is still missing.”

The Boston workshops were part of a “Day of AI” event organized by Dr. Breazeal’s program, which attracted many thousands of students around the world. They offered a glimpse into the different approaches schools are taking to AI education.

At Dearborn STEM, Hillah Barbot, a senior product manager for Amazon Future Engineer, the company’s computer science education program, led the students through a lesson in voice AI. The lessons were developed by MIT in conjunction with the Amazon program, which provides coding courses and other resources for K-12 schools. The company gave MIT a grant of more than $2 million for the project.

First, Ms. Barbot went over some voice AI lingo. She taught the students about “utterances,” the phrases that consumers say to prompt Alexa to respond.

Then the students programmed simple tasks for Alexa, such as telling jokes. Jada Reed, a ninth grader, programmed Alexa to answer questions about Japanese manga characters. “I think it’s really cool that you can train it to do different things,” she said.
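For readers curious what “programming a simple task for Alexa” can look like under the hood, here is a minimal sketch of a custom skill that tells a joke, assuming the Alexa Skills Kit SDK for Python (ask-sdk-core). The classroom lesson likely used a simplified, student-friendly interface; the intent name “TellJokeIntent” and the joke text below are hypothetical examples, not what the class built.

```python
# A minimal sketch of a custom Alexa skill, assuming the Alexa Skills Kit
# SDK for Python (ask-sdk-core). The intent name "TellJokeIntent" and the
# joke are hypothetical placeholders for illustration only.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


class TellJokeHandler(AbstractRequestHandler):
    """Responds when a spoken utterance maps to the hypothetical TellJokeIntent."""

    def can_handle(self, handler_input):
        # Fires only for requests routed to TellJokeIntent.
        return is_intent_name("TellJokeIntent")(handler_input)

    def handle(self, handler_input):
        joke = "Why did the robot go back to school? Its skills needed an update."
        return handler_input.response_builder.speak(joke).response


# Register the handler and expose the skill as an AWS Lambda entry point.
sb = SkillBuilder()
sb.add_request_handler(TellJokeHandler())
handler = sb.lambda_handler()
```

In a real skill, sample utterances such as “tell me a joke” would be mapped to the intent in the Alexa developer console, and additional handlers would cover launch and help requests.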

Dr. Breazeal said it was important for students to have access to professional software tools from leading technology companies. “We’re giving them future-proof skills and perspectives on how they can work with AI to do the things they care about,” she said.

Some Dearborn students, who had already built and programmed robots at school, said they appreciated learning how to code a different technology: voice-activated assistants. Alexa uses a range of AI technologies, including automatic speech recognition.

At least some students also said they had privacy and other concerns about AI-assisted devices.

Amazon records consumers’ conversations with its Echo speakers after a person says a “wake word” such as “Alexa.” Unless users opt out, Amazon may use their interactions with Alexa to target them with ads or use their voice recordings to train its AI models. Last week, Amazon agreed to pay $25 million to settle federal allegations that it had kept children’s voice recordings indefinitely, in violation of a federal children’s online privacy law. The company said it disputed the allegations and denied that it had violated the law. The company said customers could review and delete their Alexa voice recordings.

But the one-hour Amazon-led workshop did not touch on the company’s data practices.

Dearborn STEM students regularly scrutinize technology. Several years ago, the school introduced a course in which students used AI tools to create deepfake videos (that is, fabricated content) and examine the results. And the students had thoughts about the virtual assistant they were learning to program that morning.

“Did you know there’s a conspiracy theory that Alexa listens to your conversations in order to show you ads?” asked a ninth grader named Ebony Maxwell.

“Hearing that doesn’t scare me,” said Lania Sanders, another ninth grader. Still, Ms. Sanders said she refrained from using voice assistants because “I want to do it myself.”

A few miles away, at the Edward M. Kennedy Institute for the United States Senate, an education center that houses a full-scale replica of the U.S. Senate chamber, dozens of students from the Warren Prescott School in Charlestown, Mass., were exploring a different subject: AI policy and safety regulations.

Taking on the roles of senators from various states, the middle schoolers participated in a mock hearing in which they debated provisions of a fictional AI safety bill.

Some students wanted to ban companies and police departments from using AI to target people based on data such as their race or ethnicity. Others wanted to require schools and hospitals to assess AI systems for fairness before deploying them.

The exercise was not unfamiliar to the middle schoolers. Nancy Arsenault, an English and civics teacher at Warren Prescott, said she often asks her students to consider how digital tools affect them and the people they care about.

“As much as students love technology, they are well aware that unfettered AI is not something they want,” she said. “They want to see limits.”


