Former President Trump recently revealed in an interview with FOX Business’ Maria Bartiromo that Meta CEO Mark Zuckerberg personally called him to apologize after a photo of Trump raising a fist following an assassination attempt was wrongly labeled as misinformation on Facebook. Trump said Zuckerberg called him twice and expressed admiration for his response to the incident.
The photo in question, taken at a campaign rally in Butler, Pennsylvania, where the attempt on Trump’s life was made, was mistakenly flagged by Facebook’s automated detection tool as a doctored image. Meta Vice President of Global Policy Joel Kaplan acknowledged the error, explaining that the tool had incorrectly applied a fact-check label meant for an altered version of the photo to the original because of the similarity between the two images.
While Zuckerberg apologized for the mistake, Meta clarified that the CEO has not endorsed any candidate in the 2024 presidential election. Separately, Meta’s AI chatbot initially refused to provide information about the assassination attempt, adding to user confusion.
In a similar incident, Google’s AI chatbot Gemini also declined to answer questions about the assassination attempt on Trump, citing restrictions on election-related queries. Google stated that users could access accurate information through its search results.
Both Meta and Google have faced criticism for their handling of the incident, with some questioning the transparency and accuracy of AI-driven responses in sensitive situations. The companies have since updated their protocols to prevent similar misinformation in the future.
Overall, the incident underscores the challenges of using AI to moderate content and provide information in real time during breaking news events. As the technology plays an ever larger role in shaping public discourse, ensuring accuracy and accountability in AI-driven responses is crucial to maintaining trust and credibility.