By Matthew Reitman, RealClearLife
It’s an age-old debate playing out over modern technology: how to balance security and privacy.
With each ISIS-inspired attack in the West, governments put more pressure on Facebook, Google, Telegram, and Twitter to police extremist activity on their platforms.
Though tech companies have responded to these calls, content used to recruit and radicalize attackers still finds its way online.
Critics within the technology community say the platforms can do better, while civil libertarians wonder if it’s appropriate to task social networks with counterterrorism responsibilities in the first place.
In June, Facebook, Microsoft, Twitter, and YouTube announced they had joined forces to create the Global Internet Forum to Counter Terrorism to make their platforms “hostile to terrorists and violent extremists.”
The forum is focused on technical solutions, research, and sharing information with law enforcement.
Information sharing, while a controversial topic, is the best thing the companies can do, according to Neil Shortland, who runs UMass Lowell’s Center for Terrorism & Security Studies.
“It’s an inescapable truth that extremists use their platforms. Therefore they are part of the problem, but they’re also part of the solution,” he told RealClearLife.
Experts caution against lumping all social media together since ISIS wields each one differently.
Open platforms like Facebook and Twitter help ISIS spread its message “far and wide” to broad audiences, but they also expose the group’s accounts to suspension or government monitoring, says research analyst David Knoll of CNA, a Virginia-based analysis organization.
Apps with end-to-end encryption, like Telegram, give ISIS members and sympathizers the protection they need to exchange operational information that inspires or directs plots, he explained.
Telegram was reportedly used by ISIS terrorists to plan and coordinate the Paris attacks.
The company has closed many chat rooms, or channels, the group used to communicate, but pro-ISIS channels are still active today.
The group’s Amaq Agency regularly publishes claims of responsibility for its violence around the world via Telegram.
It has become a digital safe haven where terrorists can recruit and plot with relative impunity.
In July, the Indonesian government threatened to ban Telegram from the country unless more was done to crack down on extremist and hate speech.
Its founder pledged to create a team of moderators to remove “terrorist-related” content.
“We’re no friends of terrorists—in fact, every month we block thousands of ISIS-related public channels,” Pavel Durov said, according to the Wall Street Journal.
Therein lies the challenge.
No matter what Telegram, Twitter, YouTube, or other companies seem to do, more extremist content keeps appearing.
It’s a digital game of whack-a-mole that, Knoll’s CNA colleague Vera Zakem argues, can’t be won manually.
“It is virtually impossible to go after every piece of propaganda,” she said.
That’s a stark reality that many aren’t willing to accept, especially since experts like Knoll and Shortland expect ISIS to press its followers to conduct more attacks as the group loses territory in Iraq and Syria.
“You’ll still see a lot of the one-to-one connections,” Shortland explained.
“But instead of grooming with the goal of bringing people over to join them, it’ll be with the goal of convincing them to act in their own country and helping them along the way.”
Twenty-one percent of suspects in 38 ISIS plots in the United States over three years had direct contact with a member of the group using this technique, according to a report published in the CTC Sentinel, the journal of West Point’s Combating Terrorism Center.
The influence of these so-called “virtual plotters” ranges from an advisory role, suggesting weapons or targets, to quarterbacking the plot down to the minutes and seconds before the attack.
Much of this plotting happens on Telegram, but the conversation often starts elsewhere online.
Open online platforms like Twitter exacerbate grievances and help ISIS recruiters, who use strategies similar to those of real-world cults, said Dr. Alexander Meleagrou-Hitchens, Research Director at George Washington University’s Program on Extremism.
“They’ll cut you off from people who don’t agree with the group’s ideology and they’ll put you in echo chambers,” he explained to RealClearLife.
Today, Twitter monitors its platform for this activity more aggressively than it did in 2014 when ISIS seized Mosul, Iraq’s second-largest city.
In a company report from March of this year, Twitter said 376,890 accounts were suspended for posting terrorism-related content in the second half of 2016.
Meleagrou-Hitchens says blatantly pro-ISIS accounts now rarely last longer than a day.
That speed is largely the result of Twitter’s “proprietary spam-fighting tools,” a variant of the algorithms Facebook and Google use to automate the flagging of content that violates their terms of service.
Despite the sophisticated machine learning algorithms, however, ISIS propaganda still manages to find its way online.
The problem, Zakem explains, is that what’s considered extremist is subjective.
A post that gets flagged because it’s offensive, may not be removed if it’s not explicitly calling for acts of violence or showing them.
“It’s always been this gray line and it’s not enforced across the board and that in and of itself is part of the challenge,” Zakem says.
Critics like Hany Farid say more can be done, arguing that the automated tools don’t work as they should.
“It’s inexcusable,” he says, that the bomb-making video watched by the Manchester, England, bomber was taken down from YouTube only to reappear as recently as July 18.
“For years, these companies have been bamboozling politicians to say ‘we can’t do this’… but this is bullsh-t,” said Farid, a Dartmouth College computer science professor.
“One day or another, they are going to have to wake up to the fact that extremist groups have weaponized their platforms to great effect.” He thinks there’s a better way.
eGlyph, developed by Farid and his colleagues at the Counter Extremism Project, uses “hashing” technology to assign a unique fingerprint to photos, videos, and audio that have already been determined to be extremist content.
Instead of retroactively pulling down content, it stops extremist propaganda from being uploaded to the Internet in the first place.
The hashing technology works in a similar way to the PhotoDNA algorithm Farid made with Microsoft in 2008 to fight child pornography.
Using a database of extremist content built by the Counter Extremism Project, eGlyph will match the fingerprint of propaganda even if it’s been altered in any way.
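The underlying idea is easiest to see in miniature. The Python sketch below is illustrative only, not eGlyph’s proprietary algorithm: a simple “average hash” shrinks an image to a tiny grayscale grid so that minor edits like re-encoding or resizing barely change its 64-bit fingerprint, and an upload is then compared against a database of known fingerprints by counting differing bits. The 10-bit match threshold is an assumption chosen for the example.

```python
from PIL import Image  # Pillow imaging library


def average_hash(path, size=8):
    """Compute a 64-bit perceptual 'average hash' of an image.

    Shrinking to an 8x8 grayscale grid discards fine detail, so
    minor edits (resizing, re-encoding, light filtering) leave
    the fingerprint largely intact.
    """
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: 1 if brighter than the mean, else 0.
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits


def hamming_distance(h1, h2):
    """Count the bits where two fingerprints differ."""
    return bin(h1 ^ h2).count("1")


def matches_known_content(upload_hash, database, threshold=10):
    """Flag an upload if it is within `threshold` bits of any
    fingerprint in the curated database of known extremist content."""
    return any(hamming_distance(upload_hash, h) <= threshold
               for h in database)
```

Production systems like PhotoDNA use far more robust transforms, but the pipeline has the same shape: hash the upload, compare it against a curated database, and block on a match before the file ever goes live.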
Civil libertarians argue that automated tools like eGlyph will inevitably be used beyond their original design intentions, whether deliberately or accidentally.
Farid says the Counter Extremism Project will license eGlyph to companies specifically so they can enforce their terms of service regarding this kind of content.
(Dartmouth Professor Hany Farid explains how eGlyph technology is used to find and flag the worst of the worst violent extremist content. Courtesy of the Counter Extremism Project and YouTube)
He acknowledges that news clips containing a still image of an ISIS propaganda clip may get flagged, but proposes a “whitelist” for pre-approved publishers to circumvent the algorithm.
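Such a whitelist would sit in front of the matching step. Continuing the illustrative sketch above, where the publisher names are hypothetical placeholders, not anything eGlyph actually ships:

```python
# Hypothetical pre-approved publishers; these names are illustrative.
WHITELIST = {"bbc.co.uk", "reuters.com"}


def should_block(upload_hash, uploader_domain, database):
    # Pre-approved news outlets bypass the hash check entirely, so a
    # report containing a still from a propaganda clip isn't flagged.
    if uploader_domain in WHITELIST:
        return False
    return matches_known_content(upload_hash, database)
```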
In the end, whether the companies have tools like eGlyph or not, the decision to remove the content is up to them.
Videos from preachers known to inspire attackers, such as Ahmad Musa Jibril or Anwar al-Awlaki, the latter considered dangerous enough to be targeted in a drone strike, are still online and can be accessed in a few seconds.
They attack democracy and secularism while justifying Sharia law, but that’s not illegal, and it doesn’t violate YouTube policy.
“You can get a lot done as a radicalizer without breaking any laws or violating any terms of service,” noted Meleagrou-Hitchens.
That said, only a small percentage of people who develop a sympathetic view of ISIS will act violently in its name.
In fact, most individuals who are inspired to commit an attack are not radicalized by their online activity alone, he said.
Technology is certainly part of the problem, but it is also only a fraction of the solution.
“You still have to deal with the root causes,” Shortland said. He explains that it boils down to the individuals starting the conversations and the vulnerabilities that make those listening want to join them.
“It’s not a technical solution, in my opinion,” Shortland said. “It’s extremism facilitated via technology, but the issue is the people.”
(Tech companies are becoming more proactive about blocking online propaganda from international terror groups such as ISIS. Courtesy of TODAY and YouTube)
Meleagrou-Hitchens understands the dilemma tech companies like YouTube face, especially when politicians call for them to do more after terrorist attacks.
“I don’t envy people having to decide how to deal with it,” he said.
“It’s about accepting the limits of what can be done and being realistic without overstepping the boundaries of liberal values.”
In a broader sense, this is the crux of the issue.
The balancing act between freedom and security is a perennial one that was playing out long before the attacks in Orlando and San Bernardino, and long before 9/11.
“The internet has added an extra element to what has been this ongoing civil liberties counterterrorism debate,” Meleagrou-Hitchens explained.
Maybe this debate will never be settled, since advances in technology introduce new variables to the equation every few years.
But Knoll thinks that Americans aren’t having the right conversation.
Absolute security, the research analyst says, doesn’t exist; instead, national security should be thought of as a spectrum defined by the trade-off between risk and freedom.
Knoll says policymakers and tech CEOs should be engaging in an open dialogue, asking: “what sort of risk are we willing to tolerate in order to have our open society?”
This probably won’t be resolved anytime soon.
It’s more likely that the question gets more difficult to answer as ISIS reverts from a proto-state to an insurgency.
Researchers Craig Whiteside and Daniel Milton have demonstrated the increased emphasis the group places on its propaganda as its territory decreases.
If anything, the online reach of ISIS may grow more agile as improvements in technology give virtual plotters more capabilities.
“Terrorist groups will be able to plot more and more online as these communications devices become more robust,” Knoll said.
Typically, untrained fighters are convinced to carry out simple attacks like the ones involving knives and vehicles seen in London recently.
But Knoll hypothesizes virtual reality could give attackers the type of training they’d normally get on the battlefield without ever leaving home.
Fortunately, that chilling reality hasn’t arrived yet.
While there’s a lot of debate on this subject, the consensus seems to gel around one idea: do something.
“Better a diamond with a flaw than a pebble without,” as the Confucian saying goes.
“We’ve got to keep chipping away at this…because the alternative is unacceptable,” Farid says.
“You do the best you can, and you keep going and you keep adapting.”
Original post http://www.realclearlife.com/technology/tech-companies-can-fight-online-radicalization-smarter/