By: Aimee Picchi | CBS News
March 21, 2019
Facebook said its artificial intelligence tools failed to detect the 17-minute video that showed the terrorist attack in Christchurch, New Zealand, adding that AI is "not perfect."
In a blog post published Wednesday, Facebook executive Guy Rosen, the company's vice president of integrity, wrote that Facebook's artificial intelligence tools rely on "training data" -- thousands of examples of a particular type of content -- to discern whether content is problematic.
"This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems," Rosen wrote. "However, this particular video did not trigger our automatic detection systems."
He added that flagging videos like the Christchurch livestream automatically would require more examples to learn from: "To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare."
Facebook is under fire after the alleged terrorist used the platform to livestream his massacre at a mosque in Christchurch, and some New Zealand businesses have vowed to boycott the service. Critics increasingly point to what they say is Facebook's lack of investment in controls to safeguard against problems like misinformation and violent content, which can spread quickly on the service.