
AI increases the risks of April Fools


Achi news desk-

A rebranding as “Voltswagen.” The closing of Trader Joe’s. Email confirmations of $750 food delivery orders.

April Fools’ Day marketing stunts gone wrong are as varied as their reception. Met with everything from smiles and social media shares to confusion, disdain or even fury and falling stocks, the playful promotional tactic represents a gamble that can endear customers to a brand as quickly as it can sour them on it.

“One person’s humor is another person’s crime,” says Vivek Astvansh, professor of marketing at McGill University.

As April 1 approaches, consumers would be wise to exercise even more scepticism, with experts saying artificial intelligence is increasing the potential for high-tech promotional schemes. Whether through generative text-to-video tools that create rich scenes from brief instructions or chatbots that churn out advertising ideas on command, AI is raising new questions about authenticity and could make it even harder to differentiate between jokes, facts and deepfakes.

“In the next few days, we’ll see a lot of ads driven by GPT-4 or other generative AI tools,” Astvansh said, referring to the latest version of OpenAI’s popular ChatGPT program.

Even before the AI advances of the past 16 months – OpenAI launched ChatGPT in November 2022 – the technology’s power to surpass human capability had played a role in corporate hijinks.

On April 1, 2019, Google announced that it had discovered how to communicate with tulips in their own language, “Tulipian.” It offered translation between the perennial flower and dozens of human languages, noting “major advances in artificial intelligence.” The video ended by stating that Google Tulip would only be available that day, leaving little doubt that it was a joke.

But the misunderstandings of the past suggest more may follow, aided by AI’s capabilities.

In the run-up to April 1, 2021, Volkswagen AG issued a news release stating that its American division would change its name to “Voltswagen.” Several news outlets reported the announcement despite doubts about its authenticity. The confusion deepened when the company told reporters who asked whether it was an April Fools’ prank that the car giant was dead serious – only to admit to the stunt hours later.

The joke fell flat like an old tire in the wake of the Volkswagen “diesel dupe” scandal several years earlier, when US authorities discovered that the company had installed software on more than half a million cars that enabled them to cheat on diesel emissions tests.

Other April Fools’ Day incidents that backfired include Yahoo News mistakenly reporting in 2016 that Trader Joe’s would close all 457 of its stores in less than a year, and British online food delivery company Deliveroo sending fake confirmation emails to its customers in 2021 for $750 orders, leading thousands to think their accounts had been hacked.

Now, the ready accessibility and low cost of many AI tools open the door to more companies using the technology, including for April Fools’ fun that could go sideways.

“GPT-4 can create multiple ad campaign content at once, which could be video or could be still images. And then within a very short period of time and with very little expenditure or investment, the in-house advertising team or the marketing team can sift through the outputs that GPT-4 would have produced,” Astvansh said. All that’s left is to choose one, customize it with edits and post it.

To guard against fraud, Astvansh said disclosure of the methods and intentions would be key, especially on April 1.

“I hope they state or they put some information in their content that the seed idea or the seed content was created by a generative AI tool,” he said.

Digital watermarking – incorporating a pattern into AI-generated content to help users distinguish between real and fake images and identify their ownership – is one such disclosure method.

“Basically, it’s about making sure that the images or videos that are produced by these platforms are tagged in a way that when they then appear on the internet, labels get connected to them so … consumers know what they see is AI,” said Sam Andrey, managing director of the Dais, a public policy think tank at Toronto Metropolitan University.

The technology’s potential for deception is already well established. Witness the scams that use the voices of loved ones to convince their targets to hand over money to fraudsters, or recent robocalls impersonating prominent political figures. Combine those with sophisticated images or digitally generated characters and the result is the potential for fraud on a massive scale, including by corporate actors.

“Even just a year ago it was more cartoonish,” Andrey says of AI-generated graphics.

“If it produces harmless, normal media and lowers production costs, that’s less of a concern,” he said – for example, if AI had been applied to Tim Hortons’ square-shaped Timbits, Ikea Canada’s meatball vending machines or a Canadian Jeep flannel that “keeps you as cozy as a lumberjack in the Canadian wilderness.” All were April Fools’ Day pranks last year.

“But we shouldn’t be using AI to trick people,” Andrey said.


This report was first published by The Canadian Press on March 30, 2024.
