How to guide AI chatbots to make them more useful

Anyone can be charmed by AI-powered chatbots like ChatGPT and Bard – wow, they can write essays and recipes! – until they run into what are known as hallucinations, artificial intelligence’s tendency to fabricate information.

Chatbots, which guess at what to say based on information gathered from across the internet, can’t help but get things wrong. And when they fail – by publishing a cake recipe with wildly incorrect flour measurements, for example – it can be a real buzzkill.

Yet as mainstream tech devices continue to integrate AI, it is important to know how to make the technology work for us. After testing dozens of AI products over the past two months, I concluded that most of us are not using the technology optimally, mainly because tech companies have given us poor directions.

Chatbots are at their least useful when we ask them questions and simply hope that whatever answers they come up with on their own are true, which is how they were designed to be used. But when instructed to use information from trusted sources, such as credible websites and research papers, AI can perform helpful tasks with a high degree of accuracy.

“If you give them the right information, they can do interesting things with it,” said Sam Huettmacher, founder of AI start-up Context. “But by itself, 70 percent of what you get will not be accurate.”

That simple change – directing chatbots to work with specific data – produced sensible answers and useful advice, and over the past few months it turned me from a cynical AI skeptic into an avid power user. When I took a trip using an itinerary planned by ChatGPT, it went well because the recommendations came from my favorite travel websites.

Directing chatbots to specific high-quality sources, such as the websites of well-established media outlets and academic publications, can also help reduce the production and spread of misinformation. Here are some of the methods I used to get help with cooking, research and travel planning.
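
For readers comfortable with a little code, the same idea – handing the chatbot trusted text and telling it to stick to it – can be expressed through OpenAI’s API. The following is a minimal sketch, assuming the official openai Python package (v1 or later) and an API key in the environment; the model name, prompt wording and placeholder source text are illustrative, not the exact setup described in this column.

```python
# Minimal sketch: ground the chatbot in source material you trust.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paste in text from a trusted website or research paper (placeholder below).
trusted_source = """
(relevant passage copied from a well-established outlet or paper)
"""

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using ONLY the source material the user provides. "
                "If the source does not contain the answer, say so instead of guessing."
            ),
        },
        {
            "role": "user",
            "content": f"Source material:\n{trusted_source}\n\nQuestion: <your question here>",
        },
    ],
)
print(response.choices[0].message.content)
```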

Chatbots like ChatGPT and Bard can write recipes that sound great in theory but don’t work in practice. In an experiment by the New York Times food desk in November, an early AI model generated recipes for a Thanksgiving menu that included an extremely dry turkey and a dense cake.

I, too, had disappointing results with AI-generated seafood recipes. But that changed when I experimented with ChatGPT plug-ins, which are essentially third-party apps that work with the chatbot. (Only subscribers who pay $20 a month for access to GPT-4, the latest version of the chatbot, can use plug-ins, which can be activated in the settings menu.)

On ChatGPT’s plug-ins menu, I selected Tasty Recipes, which pulls data from Tasty, the recipe site owned by BuzzFeed, a well-known media company. I then asked the chatbot to create a meal plan using recipes from the site, including seafood dishes, ground pork and vegetable sides. The bot came up with an inspiring meal plan that included a lemongrass pork bun bowl, grilled tofu tacos and an everything-in-the-fridge pasta; each meal suggestion included a link to a recipe on Tasty.

For recipes from other publications, I used Link Reader, a plug-in that let me paste in a web link to generate a meal plan using recipes from other trusted sites, such as Serious Eats. The chatbot pulled data from those sites to create meal plans and directed me to the websites to read the recipes. It took extra effort, but it beat a meal plan concocted by AI on its own.
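
Plug-ins like Link Reader are closed tools, but a rough approximation of what they do – fetch a page, strip it down to text and hand it to the model – can be sketched in a few lines. This assumes the requests and beautifulsoup4 packages plus the openai client from the earlier sketch; the URL below is a placeholder, not a real recipe page, and sites’ terms on automated access still apply.

```python
# Rough approximation of a "read this link" plug-in: download a page,
# reduce it to plain text and ask the model to build a meal plan from it.
# Assumes: `pip install requests beautifulsoup4 openai`.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI


def page_text(url: str) -> str:
    """Download a page and strip the HTML down to readable text."""
    html = requests.get(url, timeout=30).text
    return BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)


client = OpenAI()
recipe_text = page_text("https://example.com/a-trusted-recipe")  # placeholder URL

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Build a weekly meal plan using only the recipes provided."},
        {"role": "user", "content": recipe_text[:12000]},  # crude truncation to fit the context window
    ],
)
print(response.choices[0].message.content)
```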

When I was doing research for an article about a popular video game series, I turned to ChatGPT and Bard to refresh my memory by having them summarize the storylines of past games. Both bots got important details about the games’ stories and characters wrong.

After testing several other AI tools, I concluded that for research, it was important to focus on reliable sources and quickly double-check data for accuracy. I finally found a tool that provides this: Humata.ai, a free web app that has become popular among academic researchers and lawyers.

The app lets you upload documents, such as PDFs, and from there a chatbot answers your questions about the content alongside a copy of the document, highlighting the relevant portions.

In one test, I uploaded a research paper I found on PubMed, the government-run search engine for scientific literature. The tool produced a relevant summary of the lengthy document in minutes, a process that would have taken me hours, and I glanced over the highlights to double-check that the summaries were accurate.
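
Humata.ai is a hosted product rather than something you script, but the underlying pattern – extract the document’s text and keep the model’s answers pinned to it – is simple to sketch. The example below assumes the pypdf and openai packages; “paper.pdf” is a placeholder filename, and the crude truncation stands in for the chunking a real tool would do.

```python
# Bare-bones document Q&A in the spirit of tools like Humata.ai (not its API).
# Assumes: `pip install pypdf openai`; "paper.pdf" is a placeholder filename.
from pypdf import PdfReader
from openai import OpenAI

reader = PdfReader("paper.pdf")  # e.g. a paper downloaded via PubMed
document = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Answer questions using only the document text supplied, and quote "
                "the passage you relied on so the reader can double-check it."
            ),
        },
        {
            "role": "user",
            "content": f"Document:\n{document[:12000]}\n\nQuestion: What are the paper's main findings?",
        },
    ],
)
print(response.choices[0].message.content)
```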

Cyrus Khajwandi, the founder of Humata, which is based in Austin, Texas, developed the app when he was a researcher at Stanford and needed help reading dense scientific articles, he said. The problem with chatbots like ChatGPT, he said, is that they rely on outdated snapshots of the web, so their data may lack relevant context.

When a Times travel writer recently asked ChatGPT to design an itinerary for Milan, the bot directed her to a central part of the city that turned out to be deserted because, among other issues, it was an Italian holiday.

I had better luck when I asked for a vacation itinerary for me, my wife and our dogs in Mendocino County, California. As I did when planning meals, I asked ChatGPT to include suggestions from some of my favorite travel sites, such as Thrillist, which is owned by Vox Media, and The Times’ travel section.

Within minutes, the chatbot generated an itinerary that included dog-friendly restaurants and activities, among them a visit to a farm with wine and cheese pairings and a train ride to a popular hiking trail. It spared me several hours of planning, and most important, the dogs had a wonderful time.

Google and OpenAI, which works closely with Microsoft, say they are working to reduce hallucinations in their chatbots, but we can already reap AI’s benefits by taking control of the data the bots rely on to come up with their answers.

To put it another way: The main advantage of training machines with huge data sets is that they can now use language to simulate human reasoning, said Nathan Benaich, a venture capitalist who invests in AI companies. “The key step for us is to combine that capability with high quality information,” he said.


