There were two noteworthy issues related to the treatment of political content on our platforms in the past week. One involved a picture of former President Trump after the attempted assassination, to which our systems incorrectly applied a fact check label; the other involved Meta AI responses about the shooting. In both cases, our systems were erring on the side of caution given the importance and gravity of this event. While neither issue was the result of bias, it was unfortunate, and we understand why it could leave people with that impression. That is why we are constantly working to make our products better and will continue to quickly address any issues as they arise.
We’ve investigated and here is what we’ve found:
First, it's a known issue that AI chatbots, including Meta AI, are not always reliable when it comes to breaking news or returning information in real time. In the simplest terms, the responses generated by the large language models that power these chatbots are based on the data they were trained on, which can understandably create problems when the AI is asked about rapidly developing, real-time topics that arise after training. That includes breaking news events like the attempted assassination, when there is initially an enormous amount of confusion, conflicting information, or outright conspiracy theories in the public domain (including many obviously incorrect claims that the assassination attempt didn't happen).

Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about the event after it happened, and instead give a generic response explaining that it couldn't provide any information. This is why some people reported that our AI was refusing to talk about the event. We've since updated the responses that Meta AI is providing about the assassination attempt, but we should have done this sooner. In a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn't happen, which we are quickly working to address. These types of responses are referred to as hallucinations, an industry-wide issue we see across all generative AI systems and an ongoing challenge for how AI handles real-time events. Like all generative AI systems, our models can return inaccurate or inappropriate outputs, and we'll continue to address these issues and improve these features as they evolve and as more people share their feedback.
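For illustration only: a guardrail like the one described above can be as simple as a pre-response filter that intercepts queries about a blocked topic and returns a fixed, generic message instead of a model-generated answer. The sketch below is a minimal, hypothetical Python example; the keyword list, function names, and the generate_response call are assumptions for the sake of illustration, not Meta's actual implementation.

```python
# Minimal sketch of a topic-refusal guardrail (hypothetical; not Meta's actual code).
# Queries that touch a blocked breaking-news topic get a fixed, generic response
# instead of a model-generated answer that may be wrong or outdated.

BLOCKED_TOPIC_KEYWORDS = [
    "assassination attempt",
    "trump shooting",
    "butler rally shooting",
]

GENERIC_RESPONSE = (
    "I can't provide information about this event right now. "
    "Please check trusted news sources for the latest updates."
)

def is_blocked_topic(query: str) -> bool:
    """Return True if the query touches a topic the chatbot should not answer."""
    normalized = query.lower()
    return any(keyword in normalized for keyword in BLOCKED_TOPIC_KEYWORDS)

def answer(query: str, generate_response) -> str:
    """Route a user query: refuse blocked topics, otherwise call the model."""
    if is_blocked_topic(query):
        return GENERIC_RESPONSE
    return generate_response(query)  # assumed model call, e.g. a wrapper around an LLM API
```

In practice, production filters of this kind are typically classifier-based rather than keyword-based, but the routing idea is the same: intercept the query before generation rather than trying to correct a hallucinated answer afterwards.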
Second, we also experienced an issue related to the circulation of a doctored photo of former President Trump with his fist in the air, which was altered to make it look like the Secret Service agents were smiling. Because the photo was altered, a fact check label was initially, and correctly, applied to it. When a fact check label is applied, our technology detects content that is the same as or almost identical to the content rated by fact checkers, and applies the label to those near-duplicates as well. Given the similarities between the doctored photo and the original image, which differ only subtly (although importantly), our systems incorrectly applied that fact check label to the real photo, too. Our teams worked quickly to correct this mistake.
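As a hypothetical illustration of why a near-duplicate matcher can over-match in a case like this, perceptual hashing is one common technique: visually similar images produce hashes within a small Hamming distance of each other, so a subtly edited photo can fall within the match threshold of the original. The sketch below uses the open-source Pillow and imagehash libraries and an assumed threshold; it shows the general approach, not Meta's actual matching system.

```python
# Hypothetical sketch of near-duplicate image matching with perceptual hashing
# (uses the open-source Pillow and imagehash libraries; not Meta's actual system).
import imagehash
from PIL import Image

MATCH_THRESHOLD = 8  # assumed Hamming-distance cutoff; visually similar images fall below it

def perceptual_hash(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash that changes little under small edits."""
    return imagehash.phash(Image.open(path))

def is_near_duplicate(candidate_path: str, labeled_path: str) -> bool:
    """True if a candidate image is close enough to a fact-checked image to inherit its label."""
    distance = perceptual_hash(candidate_path) - perceptual_hash(labeled_path)  # Hamming distance
    return distance <= MATCH_THRESHOLD

# A subtly doctored photo and the original can hash within the threshold of each other,
# which is how a label meant for the altered image can spill over to the real one.
```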
Both of these issues are being addressed. We’re committed to ensuring our platforms are a place where people can freely express themselves, and we are always working to make improvements.
Source: https://about.fb.com/news/2024/07/review-of-fact-checking-label-and-meta-ai-responses/