Meta’s A.I. Assistant Is Fun to Use, but It Can’t Be Trusted

In the last few days, you may have noticed something new inside Meta’s apps, including Instagram, Messenger and WhatsApp: an artificially intelligent chatbot.

Within those apps, you can chat with Meta AI and type in questions and requests like “What’s the weather this week in New York?” or “Write a poem about two dogs living in San Francisco.” The assistant will come up with responses immediately, such as “The corgi was short, with a butt so wide, the lab was tall, with a tongue that would glide.” You can also instruct Meta AI to produce pictures — like an illustration of a family watching fireworks.

This is Meta’s response to OpenAI’s ChatGPT, the chatbot that upended the tech industry in 2022, and similar bots including Google’s Gemini and Microsoft’s Bing AI. The Meta bot’s image generator also competes with A.I. imaging tools like Adobe’s Firefly, Midjourney and DALL-E.

Unlike other chatbots and image generators, Meta’s A.I. assistant is a free tool baked into apps that billions of people use every day, making it the most aggressive push yet from a big tech company to bring this flavor of artificial intelligence — known as generative A.I. — to the mainstream.

“We believe Meta AI is now the most intelligent AI assistant that you can freely use,” Mark Zuckerberg, the company’s chief executive, wrote on Instagram on Thursday.

The new bot invites you to “ask Meta AI anything” — but my advice, after testing it for six days, is to approach it with caution. It makes lots of mistakes when you treat it as a search engine. For now, you can have some fun: Its image generator can be a clever way to express yourself when chatting with friends.

A Meta spokeswoman said that because the technology was new, it might not always return accurate responses, similar to other A.I. systems. There is currently no way to turn off Meta AI inside the apps.

Here’s what doesn’t work well — and what does — in Meta’s AI.

Meta announced its chatbot as a replacement for web search. A group of friends planning a trip, for example, could look up flights while chatting by typing queries for Meta AI into the search bar at the top of Messenger or Instagram, the company said.

I’ll be blunt: Don’t do this. Meta AI fails spectacularly at basic search queries like looking up recipes, airfares and weekend activities.

In response to my request to look up flights from New York to Colorado, the chatbot listed instructions on how to take public transportation from the Denver airport to downtown. And when I asked for flights from Oakland, Calif., to Puerto Vallarta in Mexico, the bot listed flights departing from Seattle, San Francisco and Los Angeles.

When I asked Meta AI to look up a recipe for baking Japanese milk bread, the bot produced a generic bread recipe that skipped the most important step: tangzhong, the technique that involves cooking flour and milk into a paste.

The A.I. also made up other basic information. When I asked it for suggestions for a romantic weekend in Oakland, its list included a fictional business. And when I asked it to tell me about myself — Brian Chen the journalist — it said I worked at The New York Times but incorrectly mentioned a tech blog I’ve never written for, The Verge.

Bing AI and Gemini, which are hooked directly into the Microsoft and Google search engines, did better at these types of search tasks, but clicking on a link through an old-fashioned web search is still more efficient.

A.I. chatbots work by looking for patterns in how words are used together, similar to the predictive text systems on our phones that suggest words to complete a sentence. All of them have struggled with numbers.
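To make that idea concrete, here is a minimal sketch in Python of a bigram predictor, the crudest version of this kind of pattern matching. It is not how Meta AI actually works (commercial chatbots rely on large neural networks trained on vast amounts of text, and the tiny corpus and helper function below are invented for illustration), but it shows what "predict the next word from past patterns" can look like:

    from collections import Counter, defaultdict

    # A tiny, made-up training text. Real chatbots learn from vast corpora.
    corpus = "the dog chased the dog and the dog caught the ball".split()

    # Count which word tends to follow which (a "bigram" model).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        # Return the word most often seen after `word`, like predictive text.
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))  # prints "dog", which followed "the" most often

A model like this has no concept of syllables or arithmetic, which hints at why even far more sophisticated systems stumble on counting tasks.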

Unsurprisingly, Meta’s assistant stinks at counting. When you ask it for a five-syllable word starting with the letter w, it will respond with “wonderfully,” which has four syllables. When you ask it for a four-syllable word starting with w, it will offer “wonderful,” which has three syllables. Gemini and ChatGPT also fail at these tests.

Like other chatbots, Meta’s performed better the more information you gave it.

It excelled at editing existing paragraphs. For example, when I fed Meta AI a paragraph that felt verbose and asked for it to be tightened, the chatbot trimmed all the unnecessary words. When I asked it to improve a sentence written in passive voice, the bot rewrote it in active voice and added more context. When I asked it to remove jargon from a paragraph from a tech blog, it rewrote highly technical terms in plain language.

Because Meta AI is better when it works with existing text, it can be helpful for studying. For instance, if you’re taking a history class and studying World War II, you can paste a link to a website with information about the war into the search bar and then ask the bot to quiz you. The chatbot will read the information on the website and generate a multiple-choice test.

The most compelling aspect of Meta AI is its ability to generate images by typing “/imagine” followed by a description of the desired image. For instance, “/imagine a photograph of a cat sleeping on a window sill” will produce a convincing image in a few seconds.

Meta’s A.I. is much faster than other image generators like Midjourney, which can take more than a minute to produce an image. But the results can be weird: images of people occasionally lacked limbs or looked cross-eyed.

Ethics experts have raised concerns about the implications of generating fake images because they can contribute to the spread of misinformation online. But in the context of using A.I. while chatting with friends and family in WhatsApp and Messenger, Meta AI is a positive example of how generating fake images can be fun — and safe — if we treat it as a new form of emoji.

In a group conversation with my in-laws, I mentioned I was shopping for a robust baby stroller that could withstand the crooked roads of my neighborhood. In seconds, my wife used Meta AI to generate an image of a stroller with enormous wheels that made it resemble a monster truck, stamped with a helpful label that said, “Imagined with AI.”

