NYC defends AI chatbot that told people to break laws

NEW YORK –

New York City Mayor Eric Adams is defending the city’s new artificial intelligence chatbot that has been caught in recent days giving business owners wrong answers or advice that, if followed, would violate the law.

When it was launched as a pilot in October, the MyCity chatbot was touted as the first city-wide use of such AI technology, something that would provide business owners with “actionable and trusted information” in response to queries typed into an online portal.

That has not always proven to be the case: journalists at the investigative news outlet The Markup first reported last week that the chatbot was getting things wrong. It incorrectly advised that employers could take a cut of their employees’ tips and that no regulations require bosses to give notice of changes to employees’ schedules.

“It’s wrong in some areas, and we have to fix it,” Adams, a Democrat, told reporters Tuesday, stressing that it was a pilot program. “Any time you use technology, you need to put it in the real environment to get the kinks out.”

Adams has been an ardent advocate of deploying unproven technology in the city, with an optimism that is not always justified. He stationed a 400-pound egg-shaped robot in the Times Square subway station last year, hoping it would help police deter crime; it was retired about five months later, with commuters noting that it didn’t seem to do anything and couldn’t use stairs.

The chatbot remained online on Thursday and was still sometimes giving incorrect answers. It said shopkeepers were free to go cashless, apparently unaware of the city council’s 2020 law barring stores from refusing to accept cash. It also said the city’s minimum wage is US$15 an hour, even though it was raised to $16 in 2024.

The chatbot, which relies on Microsoft’s Azure AI service, appears to be led astray by problems common to generative AI platforms such as ChatGPT, which are known to sometimes make things up or assert falsehoods with HAL-like confidence.

Microsoft declined to say what might be causing the problems, but said in a statement that it is working with the city to fix them. The city’s Office of Technology and Innovation said in a statement that “as soon as next week, we expect to significantly mitigate incorrect answers.”

Neither Microsoft nor City Hall responded to questions about what caused the errors and how they could be fixed.

The city has updated disclaimers on the MyCity chatbot website, noting that “its responses may sometimes be inaccurate or incomplete” and telling businesses “not to use its responses as legal or professional advice.”

Andrew Rigie, who advocates for thousands of restaurant owners as executive director of the NYC Hospitality Alliance, said he has heard from business owners confused by the chatbot’s responses.

“I applaud the city for trying to use AI to help businesses, but it needs to work,” he said, warning that following some of the chatbot’s guidance could have serious legal consequences. “If I ask a question and then have to go back to my lawyers to find out whether the answer is correct, it defeats the purpose.”
