Two recent Meta AI-related issues regarding content about Donald Trump were the result of technical mistakes and not bias, said the vice president of global policy for the parent company of Facebook, Instagram <a href="https://www.thenationalnews.com/future/technology/2024/07/03/threads-anniversary-meta-social-media/" target="_blank">and Threads</a>.

“It's a known issue that AI chatbots, including Meta AI, are not always reliable when it comes to breaking news or returning information in real time,” said Meta’s Joel Kaplan. “We are constantly working to make our products better and will continue to quickly address any issues as they arise,” he added.

Mr Kaplan was referring to a photograph of the former US president and 2024 Republican presidential nominee that Meta’s systems incorrectly flagged as digitally altered, and several instances in which Meta’s AI chatbot appeared not to respond to inquiries about the <a href="https://www.thenationalnews.com/news/us/2024/07/30/encrypted-apps-stall-fbi-investigation-into-donald-trumps-would-be-assassin/" target="_blank">assassination attempt</a> on Mr Trump's life.

Meta’s statement gave a rare insight into the company’s approach to handling potential misinformation on its platforms, and showed the difficulty of putting those plans into practice given the pace of breaking news, the imperfect nature of AI technology and the <a href="https://www.thenationalnews.com/business/2023/10/17/why-ai-isnt-a-silver-bullet-for-all-problems/" target="_blank">uncertainty about how consumers will use it</a>.

Mr Kaplan said AI chatbots rely on vast amounts of training data, which makes them less than ideal for handling a fluid breaking news event and makes it almost impossible to ensure the accuracy of all the data on which they are trained.

“Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about it after it happened – and instead give a generic response about how it couldn’t provide any information,” he added, explaining that Meta had since updated the chatbot to some extent, while acknowledging that it probably should have been done sooner.

Mr Kaplan also described what many consider to be a continuing flaw in chatbots when they are used to follow news of events such as the Trump assassination attempt. “There is initially an enormous amount of confusion, conflicting information or outright conspiracy theories in the public domain,” he said.

The lengthy post on Meta’s news blog also addressed a photo of Mr Trump taken shortly after the assassination attempt on July 13, to which the company’s systems incorrectly applied a fact check label. Meta said the photo was flagged as altered because of its resemblance to another image that its systems had correctly identified as doctored.

“When a fact check label is applied, our technology detects content that is the same or almost exactly the same as those rated by fact checkers, and adds a label to that content as well,” the company said. “Given the similarities between the doctored photo and the original image – which are only subtly (although importantly) different – our systems incorrectly applied that fact check to the real photo, too.”

In response, Mr Trump did not mince his words, appearing to disregard much of Meta's statement. “Facebook has just admitted that it wrongly censored the Trump 'attempted assassination photo',” he posted to his <i>Truth Social</i> platform. “They made it virtually impossible to find pictures or anything about this heinous act.”
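Meta has not published details of the matching system behind these labels, but near-duplicate image detection of the kind the company describes is commonly built on perceptual hashing. The sketch below is a minimal illustration under that assumption, using the open-source imagehash library; the threshold, function names and data structures are illustrative, not Meta's implementation.

```python
# Illustrative sketch only: Meta has not disclosed how its matching works.
# Perceptual hashing is a common technique for finding near-duplicate images.
# Requires: pip install pillow imagehash
import imagehash
from PIL import Image

# Hamming-distance threshold (assumed for illustration): 0 means identical
# 64-bit hashes; small values tolerate re-encoding, resizing and minor edits.
MATCH_THRESHOLD = 8

def perceptual_hash(path: str) -> imagehash.ImageHash:
    """Compute a 64-bit perceptual hash that survives small alterations."""
    return imagehash.phash(Image.open(path))

def find_fact_check_label(candidate_path: str,
                          labelled: list[tuple[imagehash.ImageHash, str]]) -> str | None:
    """Return the label of the first known image within the threshold, if any."""
    candidate = perceptual_hash(candidate_path)
    for known_hash, label in labelled:
        # Subtracting two ImageHash objects yields their Hamming distance.
        if candidate - known_hash <= MATCH_THRESHOLD:
            return label
    return None

# Example: an authentic photo that differs only subtly from a doctored one
# can fall inside the threshold and wrongly inherit its fact check label,
# which is the failure mode Meta described.
# labelled = [(perceptual_hash("doctored.jpg"), "altered photo")]
# print(find_fact_check_label("authentic.jpg", labelled))
```

The trade-off Meta describes is inherent to this approach: a looser threshold catches more copies of a doctored image, but raises the risk of mislabelling the authentic photo it was derived from.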
Meta is not alone in facing occasional problems when implementing AI tools while trying to ensure accuracy amid a sea of <a href="https://www.thenationalnews.com/weekend/2023/10/20/fake-it-to-make-it-the-ai-threat-to-the-us-election/" target="_blank">misinformation and disinformation</a>.

Earlier this year, Alphabet-owned Google was forced to <a href="https://www.thenationalnews.com/business/technology/2024/02/22/google-suspends-gemini-ai-image-generator-after-inaccurate-historical-images/" target="_blank">suspend the image generation tool</a> in its AI platform Gemini, following criticism over its creation of historically inaccurate images.

In 2022, shortly after deploying <a href="https://www.thenationalnews.com/business/technology/2022/08/11/metas-new-chatbot-claims-donald-trump-will-always-be-president/" target="_blank">its BlenderBot 3 chatbot</a>, Meta had to address various inaccurate statements made by the bot. Media reports at the time said it claimed that Mr Trump “always will be” US president, among other inaccurate comments.

While many Big Tech companies are taking steps to increase disclosure for content created with AI, there is little consensus, and little government regulation, on how it should be done.

Microsoft, in a recent report, stressed the need for the entire industry and regulators to protect people from abusive and misleading AI content. “One of the most important things the US can do is pass a comprehensive deepfake fraud statute,” an <a href="https://blogs.microsoft.com/on-the-issues/2024/07/30/protecting-the-public-from-abusive-ai-generated-content/" target="_blank">introduction to the report read</a>. “We don't have all the solutions or perfect ones, but we want to contribute to and accelerate action.”
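To illustrate one simple form such disclosure can take, the sketch below writes a plain-text provenance note into an image's own metadata. It is an assumption-laden illustration: the “ai_disclosure” key is a hypothetical convention, not part of any industry standard such as the C2PA content credentials that several large technology companies support.

```python
# Purely illustrative: writes and reads an AI-disclosure note in a PNG's
# text metadata using Pillow (pip install pillow). The "ai_disclosure" key
# is a hypothetical convention, not an industry standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_disclosure(img: Image.Image, path: str, tool: str) -> None:
    """Save the image with a plain-text note that it was AI-generated."""
    info = PngInfo()
    info.add_text("ai_disclosure", f"generated-by={tool}")
    img.save(path, pnginfo=info)

def read_disclosure(path: str) -> str | None:
    """Return the disclosure note if present, else None."""
    # PNG text chunks are exposed via the .text mapping of a PngImageFile.
    return Image.open(path).text.get("ai_disclosure")

# Example usage:
# save_with_disclosure(Image.new("RGB", (64, 64)), "out.png", "example-model")
# print(read_disclosure("out.png"))  # "generated-by=example-model"
```

Metadata of this kind is easily stripped when an image is re-encoded or re-shared, which is one reason reports such as Microsoft's argue that technical labelling must be backed by legal measures.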