SAN FRANCISCO/BENGALURU - The Friday massacre at two New Zealand mosques, live-streamed to the world, was not the first internet broadcast of a violent crime, but it showed that stopping gory footage from spreading online remains a major challenge for tech companies despite years of investment.
The massacre in Christchurch was live-streamed by an attacker through his Facebook profile for 17 minutes, according to a copy seen by Reuters. Facebook said it removed the stream after being alerted to it by New Zealand police.
But a few hours later, footage from the stream remained on Facebook, Twitter and Alphabet Inc's YouTube, as well as Facebook-owned Instagram and WhatsApp. It also remained available on file-sharing websites such as New Zealand-based Mega.nz.
People who wanted to spread the material raced into action, repackaging and distributing the video across many apps and websites within minutes.
Facebook, Twitter, YouTube and Mega on Friday said they were taking action to remove the copies.
Other violent crimes that have been live-streamed include the 2017 case of a father in Thailand who broadcast himself killing his daughter on Facebook. Facebook removed the video after more than a day, by which time it had been viewed 370,000 times.
In the United States, the 2017 assault in Chicago of an 18-year-old man with special needs, accompanied by anti-white racial taunts, and the fatal shooting of a man in Cleveland, Ohio, that same year were also live-streamed.
Facebook, the world's largest social media network with about 2.3 billion monthly users, has tripled the size of its safety and security team to 30,000 people over the last three years to respond more quickly to reports of offensive content. It has also focused on developing artificial intelligence systems that catch material without waiting for users to report it.
But the viral reach of yet another obscene video caused politicians around the globe on Friday to voice the same conclusion: Tech companies are failing.
As the massacre video continued to spread, former New Zealand Prime Minister Helen Clark in televised remarks said companies had been slow to remove hate speech.
“What’s going on here?” she said, referring to the shooter's ability to livestream for 17 minutes. “I think this will add to all the calls around the world for more effective regulation of social media platforms.”
At least some expect Facebook to suffer consequences.
Facebook "helped provide a platform for today's horrific attack and will undoubtedly be called into question for facilitating the spread of this," said Clement Thibault, analyst at financial data website Investing.com.
The company's profit margins fell last year as it spent to address the challenge, and stock analysts are bracing for further short-term hits to its profitability, whether or not regulations materialize and despite relatively few alternatives for advertisers.
Shares of Facebook closed down 2.5 percent on Friday.
After Facebook stopped the Christchurch livestream, it told moderators to delete any copies of the footage, as well as comments praising the attack.
"All content praising, supporting and representing the attack and the perpetrator(s) should be removed from our platform," Facebook instructed content moderators in India, according to an email seen by Reuters.
Users intent on sharing the violent video took several approaches. Copies reviewed by Reuters showed that some users had recorded the video playing on their own phones or computers, creating a new version with a digital fingerprint different from the original's in order to evade companies' detection systems. Others shared shorter sections or screenshots from the gunman's livestream. In a 17-minute video reviewed by Reuters, which starts with the attacker driving to a mosque, the shooting begins about six minutes in.
On the internet discussion forum Reddit, users strategized about how to evade moderators, directing each other to video apps that had yet to take action and sending footage through messaging apps.
Besides acting on user complaints about copies, YouTube said on Friday that it was trying to identify copies with an automated tool that finds videos likely to be violent in nature based on a combination of the title and description of the video, the characteristics of the user uploading it and objects in the footage.
Exact matches of removed material cannot be uploaded again at YouTube and Facebook.
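The evasion tactic described above exploits how exact matching works: a fingerprint such as a cryptographic hash identifies a file only if its bytes are identical, so re-recording or re-encoding a video yields a new fingerprint that an exact-match blocklist cannot catch. A minimal sketch of the idea (the SHA-256 blocklist here is an illustrative assumption, not a description of either company's actual system):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint: a SHA-256 digest of the raw file bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of digests of already-removed uploads.
blocklist = set()

original = b"...bytes of the removed video..."
blocklist.add(fingerprint(original))

# An identical re-upload is blocked: its digest matches exactly.
assert fingerprint(original) in blocklist

# But a re-encoded or re-recorded copy differs in even a single byte,
# so its digest no longer appears in the blocklist and it slips through.
re_encoded = original + b"\x00"  # stand-in for a re-encoded copy
assert fingerprint(re_encoded) not in blocklist
```

This is why platforms supplement exact matching with perceptual techniques, such as the visual classifiers and audio matching mentioned above, which tolerate small changes to the footage.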
Facebook said it, too, was relying on user complaints and an artificial intelligence system to identify violent footage and send it to moderators.
It also was using audio technology to detect Christchurch broadcast footage, in which gunshots could be heard and music played in the attacker's car, according to a copy reviewed by Reuters.
Researchers and entrepreneurs specializing in detection systems said they were surprised that users in the initial hours after the attack were able to circumvent Facebook's tools.
Joshua Buxbaum, chief executive of Irvine, California-based moderation technology company WebPurify, said Facebook and other services could employ image recognition or other types of AI to identify copies in additional ways.
"I would certainly think given the budgets they have that they would have the ability to root out these videos," Buxbaum said.
Experts said the companies could set their detection tools and removal processes to be more aggressive, but YouTube and Facebook have said they want to be careful not to remove sensitive videos that either come from news organizations or have news value.
Politicians in multiple countries said social media companies need to be more vigilant.
"This is a case where you’re giving a platform for hate," Democratic U.S. Senator Cory Booker, who is running for president, said at a campaign event in New Hampshire. "That’s unacceptable, it should have never happened, and it should have been taken down a lot more swiftly."
Britain's interior minister, Sajid Javid, said on Twitter, "Enough is enough."