

A court in California has found that Twitter and the Islamic State of Iraq and Syria (ISIS) are not friends with benefits, dismissing a lawsuit that claimed the struggling microblogging-cum-messaging social network is responsible for ISIS's activities on its platform.
The lawsuit, first filed in January, claimed that ISIS’s presence on Twitter constituted material support for the terror group, and sought to hold Twitter responsible for an attack that was planned and discussed on the platform.
In particular, the suit, filed by the family of two men shot and killed in Jordan, stated that Twitter knowingly permitted ISIS to use its network to spread propaganda, attract new recruits and raise funds for terrorist activities.
U.S. District Judge William H. Orrick ruled that, while the ISIS-related killings were “horrific,” the Communications Decency Act immunizes Twitter from liability for content created by third parties.
“As horrific as these deaths were … Twitter cannot be treated as a publisher or speaker of ISIS’s hateful rhetoric and is not liable under the facts alleged,” Judge Orrick ruled.
“Apart from the private nature of Direct Messaging, plaintiffs identify no other way in which their Direct Messaging theory seeks to treat Twitter as anything other than a publisher of information provided by another information content provider,” he added, according to The Verge, noting that Twitter’s Direct Messaging feature “does not remove the transmission of such messages from the scope of publishing activity under section 230(c)(1).”
Interestingly, the judge didn’t completely close the door on the case, noting that the family has the option to amend and refile it on different grounds.
The ruling is the first in what could be a series of similar cases targeting social media platforms for allowing members of ISIS to use their networks, including a case filed against Twitter, Google, Inc. and Facebook, Inc. in June by the father of a victim of the November 2015 Islamic terrorist attacks in Paris, on the basis that the companies “provided material support” to the terrorist group.
While social media sites do have a moral, and in some cases legal, obligation to police content, when you’re the likes of Facebook with 1.4 billion users, or even Twitter with 313 million users, policing that content perfectly is about as likely as seeing a unicorn on the 101 into San Francisco — and by that we mean an actual unicorn, not a startup.
Common sense has prevailed with this ruling, and it can only be hoped that it will again in subsequent cases.