
Instagram, Facebook news: new AI on Meta platforms


Achi news desk

Facebook parent Meta Platforms unveiled a new set of artificial intelligence systems Thursday that power what CEO Mark Zuckerberg calls “the most intelligent AI assistant you can freely use.”

But as Zuckerberg’s crew of amped-up Meta AI agents began venturing into social media this week to engage with real people, their strange exchanges revealed the lingering limitations of even the best generative AI technology.

One joined a Facebook mothers’ group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum.

Meta and leading AI developers Google and OpenAI, along with startups such as Anthropic, Cohere and France’s Mistral, have been churning out new AI language models, each hoping to persuade customers that it has the smartest, most convenient or most efficient chatbots.

While Meta is saving the most powerful of its AI models, called Llama 3, for later, on Thursday it publicly released two smaller versions of the same Llama 3 system and said the model is now baked into the Meta AI assistant feature in Facebook, Instagram and WhatsApp.

AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions usually smarter and more capable than their predecessors. Meta’s newest models were built with 8 billion and 70 billion parameters, the adjustable internal values a model learns from its training data and a rough measure of its size and capability. A larger model of roughly 400 billion parameters is still being trained.
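To give a sense of scale, the parameter counts above translate directly into hardware requirements. The sketch below is a back-of-the-envelope estimate (not from the article, and the helper name is hypothetical), assuming each parameter is stored in 16-bit precision, i.e. 2 bytes per parameter:

```python
# Rough memory-footprint estimate for model weights alone,
# assuming 16-bit (2-byte) storage per parameter.

def approx_memory_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Approximate gigabytes needed just to hold the model weights."""
    return num_params * bytes_per_param / 1e9

models = [
    ("Llama 3 8B", 8_000_000_000),
    ("Llama 3 70B", 70_000_000_000),
    ("~400B model (in training)", 400_000_000_000),
]

for name, params in models:
    print(f"{name}: ~{approx_memory_gb(params):.0f} GB of weights")
```

By this rough measure, the 8-billion-parameter model needs about 16 GB just for its weights, while a 400-billion-parameter model would need around 800 GB, which hints at why only the largest companies can train and serve such systems.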

“The vast majority of users honestly don’t know or care too much about the base model, but the way they’ll experience it is just a much more useful, fun and versatile AI assistant,” said Nick Clegg, Meta’s president of global affairs, in an interview.

He added that Meta’s AI agent is loosening up. Some people found the earlier Llama 2 model — released less than a year ago — to be “a bit stiff and sanctimonious at times in not responding to what were often perfectly innocent or harmless suggestions and questions,” he said.

But in letting down their guard, Meta’s AI agents were also spotted this week posing as humans with made-up life experiences. The official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by members of the group, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press.

“Apologies for the mistake! I’m just a large language model, I have no experiences or children,” the chatbot told the group.

One member of the group, who also happens to study AI, said it was clear the agent did not know how to differentiate a helpful response from one that would be considered insensitive, disrespectful or meaningless when produced by AI rather than a human.

“An AI assistant that is not reliably helpful and can be actively harmful places much of the burden on the individuals who use it,” said Aleksandra Korolova, an assistant professor of computer science at Princeton University.

Clegg said on Wednesday he was unaware of the exchange. Facebook’s online support page says the Meta AI agent will join a group chat if invited, or if someone “asks a question in a post and no one responds within an hour.” Group administrators have the ability to turn it off.

In another example shown to the AP on Thursday, the agent caused confusion at a junk-swap forum near Boston. Just an hour after a Facebook user posted about looking for specific items, an AI agent offered a “gently used” Canon camera and a “nearly new portable air conditioning unit that I never used.”

Meta said in a written statement Thursday that “this is new technology and may not always return the response we intend, which is the same for all generative AI systems.” The company said it is constantly working to improve its features.

In the year after ChatGPT sparked a frenzy for AI technology that produces human-like writing, images, code and sound, the tech industry and academia introduced some 149 large AI systems trained on massive datasets, more than double the number from the previous year, according to a Stanford University survey.

They may eventually reach a limit — at least in terms of data, says Nestor Maslej, research manager for Stanford’s Institute for Human-Centered Artificial Intelligence.

“I think it’s been clear that if you scale the models on more data, they can get progressively better,” he said. “But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet.”

More data—acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits—will continue to drive improvements. “Yet they still can’t plan well,” Maslej said. “They still hallucinate. They still make mistakes in reasoning.”

Reaching AI systems that can perform higher-order cognitive tasks and common-sense reasoning — areas where humans still excel — may require moving beyond building ever-larger models.

For the flood of businesses looking to adopt generative AI, which model they choose depends on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write financial reports and summarize long documents.

“You see companies looking at fit, testing all of the different models for what they’re trying to do and finding ones that are better in some areas than others,” said Todd Lohr, technology consulting leader at KPMG.

Unlike other model developers who sell their AI services to other businesses, Meta largely designs its AI products for consumers — those who use its advertising-based social networks. Joelle Pineau, Meta’s vice president of AI research, said at an event in London last week that the company’s goal over time is to make Llama-powered Meta AI “the most useful assistant in the world.”

“In many ways, the models we have today are going to be child’s play compared to the models coming in five years,” she said.

But she said the “question on the table” is whether researchers have managed to refine the larger Llama 3 model so that it is safe to use and does not, for example, hallucinate or engage in hate speech. Unlike leading proprietary systems from Google and OpenAI, Meta has so far advocated a more open approach, publicly releasing key components of its AI systems for others to use.

“It’s not just a technical question,” Pineau said. “It’s a social question. What is the behavior we want from these models? How do we shape that? And if we continue to make our models ever more general and powerful without socializing them properly, we’re going to have a big problem on our hands.”


AP business writers Kelvin Chan in London and Barbara Ortutay in Oakland, California, contributed to this report.
