THE CEO OF TWITTER, JACK DORSEY, AND THE BOARD OF DIRECTORS OF TWITTER SHOULD BE ARRESTED AND PROSECUTED FOR THE PROMOTION OF HATE CRIMES AND MURDER

Congress has called out both Facebook and Twitter for the promotion of hate speech and even terroristic acts.
While it appears that Facebook is now putting a stop to white supremacists on Facebook and the spreading of their hate? It seems the CEO of Twitter, Jack Dorsey, and the Board of Directors of Twitter will never do the same to stop the spread of hate, white supremacy and racist bullshit on Twitter. Why? Because it turns out that if they actually went after the white supremacists spreading hate on Twitter? Why, they would have to ban a whole lot of Republican politicians, including that proven bigot, racist, misogynist pig, treasonous Cock Sucker for Putin, Donald J Trump, and the rest of his white supremacist Trumpanzee cunts, whose tweets would trip those algorithms that flag white supremacists spreading hate on Twitter.

Why can’t Twitter stop Trump’s hateful tweets about Ilhan Omar?
https://www.theguardian.com/us-news/2019/apr/26/trump-ilhan-omar-jack-dorsey-tweet-remove
Congresswoman received death threats following video Trump posted – but he didn’t technically violate the rules
The rules just aren’t the same for Donald Trump as they are for the rest of us. Twitter CEO Jack Dorsey apparently admitted as much this week on a phone call with Minnesota representative Ilhan Omar.
As reported by the Washington Post, Dorsey, often criticized for his inaction when it comes to removing hateful and threatening content from the platform, was asked by Omar why he hadn’t taken down a video posted by Trump earlier in the month. The video, which spliced together misleading, out-of-context comments from Omar about the issue of Islamophobia with footage of the 9/11 attacks, was clearly targeted harassment to anyone who saw it.

Indeed Omar said she saw a sharp uptick in death threats after it was posted. But since it came from Trump, and not an average Twitter user, there was nothing Dorsey could do, he said. The tweet didn’t technically violate the rules in any case, he added. (Anyone who has used Twitter will understand the frustration at trying to parse what exactly those rules are.)
The call with Omar came the same day Dorsey met with Trump in the White House, a meeting in which the president is said to have largely complained about his follower count.
“During their conversation, [Dorsey] emphasized that death threats, incitement to violence and hateful conduct are not allowed on Twitter,” the social media platform said in a statement to the Post. “We’ve significantly invested in technology to proactively surface this type of content and will continue to focus on reducing the burden on the individual being targeted.”
Dorsey has said in the past that the public interest value of Trump’s tweets outweighs the harm of his occasional calls for violence or threats against foreign governments or members of the media.
“Blocking a world leader from Twitter or removing their controversial tweets would hide important information people should be able to see and debate,” the company explained in a statement last year. “It would also not silence that leader, but it would certainly hamper necessary discussion around their words and actions.”
More recently Dorsey declined to say whether a hypothetical direct call from Trump to murder a journalist would be grounds for his banishment.

The permissive double standard applied to Trump on Twitter hasn’t stopped him from regularly suggesting that he is himself being treated unfairly. This week Trump tweeted that Twitter doesn’t “treat me well as a Republican. Very discriminatory…”
In fact it seems more probable that Republicans such as Trump are given much more leeway than others.
A recent story from Motherboard reported that one of the reasons Twitter has had trouble removing white supremacist content from the platform, as they have largely done with the Islamic State, is that the algorithms they use might end up affecting Republican politicians.
“When you’re a star, they let you do it. You can do anything,” Trump once said in a prescient boast.
When it comes to his behavior as reported in the Mueller report, as well as his social media habits, it seems like Trump behaves like he can get away with anything. So far he’s right.

Why Won’t Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too.
A Twitter employee who works on machine learning believes that a proactive, algorithmic solution to white supremacy would also catch Republican politicians.
At a Twitter all-hands meeting on March 22, an employee asked a blunt question: Twitter has largely eradicated Islamic State propaganda from its platform. Why can’t it do the same for white supremacist content?
An executive responded by explaining that Twitter follows the law, and a technical employee who works on machine learning and artificial intelligence issues went up to the mic to add some context. (As Motherboard has previously reported, algorithms are the next great hope for platforms trying to moderate the posts of their hundreds of millions, or billions, of users.)
With every sort of content filter, there is a tradeoff, he explained. When a platform aggressively enforces against ISIS content, for instance, it can flag innocent accounts as well, such as Arabic language broadcasters. Society, in general, accepts the benefit of banning ISIS as worth inconveniencing some others, he said.
In separate discussions verified by Motherboard, that employee said Twitter hasn’t taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians.
The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Banning politicians wouldn’t be accepted by society as a trade-off for flagging all of the white supremacist propaganda, he argued.
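To make that tradeoff concrete, here is a minimal, hypothetical sketch of how an aggressive filter threshold sweeps up collateral accounts along with its targets. The accounts, scores and thresholds below are invented purely for illustration; this is not Twitter’s actual model or data.

```python
# Hypothetical illustration of the moderation tradeoff described above.
# All accounts, scores, and thresholds are invented; this is not
# Twitter's actual system or data.

# Imagine a classifier assigns each account a probability that it posts
# extremist content. Collateral accounts (e.g. broadcasters quoting
# propaganda to report on it) can score high without being extremist.
accounts = {
    "actual_propagandist": 0.97,
    "arabic_news_broadcaster": 0.81,  # quotes propaganda in news coverage
    "ordinary_user": 0.05,
}

def flagged(scores, threshold):
    """Return the accounts an automated filter would suspend at this threshold."""
    return [name for name, score in scores.items() if score >= threshold]

# An aggressive threshold catches the target *and* the innocent broadcaster:
print(flagged(accounts, 0.75))  # ['actual_propagandist', 'arabic_news_broadcaster']

# A conservative threshold avoids collateral damage but misses borderline content:
print(flagged(accounts, 0.90))  # ['actual_propagandist']
```

Raising the threshold reduces collateral bans but lets more borderline content through; that is exactly the tradeoff the employee described, with Republican politicians as the politically unacceptable collateral.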
There is no indication that this position is an official policy of Twitter, and the company told Motherboard that this “is not [an] accurate characterization of our policies or enforcement—on any level.” But the Twitter employee’s comments highlight the sometimes overlooked debate within the moderation of tech platforms: are moderation issues purely technical and algorithmic, or do societal norms play a greater role than some may acknowledge?
Though Twitter has rules against “abuse and hateful conduct,” civil rights experts, government organizations, and Twitter users say the platform hasn’t done enough to curb white supremacy and neo-Nazis on the platform, and its competitor Facebook recently explicitly banned white nationalism. Wednesday, during a parliamentary committee hearing on social media content moderation, UK MP Yvette Cooper asked Twitter why it hasn’t yet banned former KKK leader David Duke, and “Jack, ban the Nazis” has become a common reply to many of Twitter CEO Jack Dorsey’s tweets. During a recent interview with TED that allowed the public to tweet in questions, the feed was overtaken by people asking Dorsey why the platform hadn’t banned Nazis. Dorsey said “we have policies around violent extremist groups,” but did not give a straightforward answer to the question. Dorsey did not respond to two requests for comment sent via Twitter DM.
Twitter has not publicly explained why it has been able to so successfully eradicate ISIS while it continues to struggle with white nationalism. As a company, Twitter won’t say that it can’t treat white supremacy in the same way as it treated ISIS. But external experts Motherboard spoke to said that the measures taken against ISIS were so extreme that, if applied to white supremacy, there would certainly be backlash, because algorithms would obviously flag content that has been tweeted by prominent Republicans—or, at the very least, their supporters. So it’s no surprise, then, that employees at the company have realized that as well.

This is because the proactive measures taken against ISIS are more akin to the removal of spam or child porn than the more nuanced way that social media platforms traditionally police content, which can involve using algorithms to surface content but ultimately relies on humans to actually review and remove it (or leave it up). A Twitter spokesperson told Motherboard that 91 percent of the company’s terrorism-related suspensions in a six-month period in 2018 were thanks to internal, automated tools.
The argument that external experts made to Motherboard aligns with what the Twitter employee aired: Society as a whole uncontroversially and unequivocally demanded that Twitter take action against ISIS in the wake of beheading videos spreading far and wide on the platform. The automated approach that Twitter took to eradicating ISIS was successful: “I haven’t seen a legit ISIS supporter on Twitter who lasts longer than 15 seconds for two-and-a-half years,” Amarnath Amarasingam, an extremism researcher at the Institute for Strategic Dialogue, told Motherboard in a phone call. Society and politicians were willing to accept that some accounts were mistakenly suspended by Twitter during that process (for example, accounts belonging to the hacktivist group Anonymous that were reporting ISIS accounts to Twitter as part of an operation called #OpISIS were themselves banned).
That same eradicate-everything approach, applied to white supremacy, is much more controversial.
“Most people can agree a beheading video or some kind of ISIS content should be proactively removed, but when we try to talk about the alt-right or white nationalism, we get into dangerous territory, where we’re talking about [Iowa Rep.] Steve King or maybe even some of Trump’s tweets, so it becomes hard for social media companies to say ‘all of this content should be removed,’” Amarasingam said.
In March, King promoted an open white nationalist on Twitter for the third time, quote-tweeting Faith Goldy, a Canadian white nationalist. Earlier this month, Facebook banned Goldy under the site’s new policy banning white nationalism; Goldy has 122,000 followers on Twitter and has not been banned at the time of writing. Last year, Twitter banned Republican politician and white nationalist Paul Nehlen for a racist tweet he sent about actress Meghan Markle, but prior to the ban, Nehlen gained a wide following on the platform while tweeting openly white nationalist content about, for example, the “Jewish media.”
Any move that could be perceived as being anti-Republican is likely to stir backlash against the company, which has been criticized by President Trump and other prominent Republicans for having an “anti-conservative bias.” Tuesday, on the same day Trump met with Twitter’s Dorsey, the President tweeted that Twitter “[doesn’t] treat me well as a Republican. Very discriminatory,” adding, “No wonder Congress wants to get involved—and they should.”
JM Berger, author of Extremism and a number of reports on ISIS and far-right extremists on Twitter, told Motherboard that in his own research, he has found that “a very large number of white nationalists identify themselves as avid Trump supporters.”
“Cracking down on white nationalists will therefore involve removing a lot of people who identify to a greater or lesser extent as Trump supporters, and some people in Trump circles and pro-Trump media will certainly seize on this to complain they are being persecuted,” Berger said. “There’s going to be controversy here that we didn’t see with ISIS, because there are more white nationalists than there are ISIS supporters, and white nationalists are closer to the levers of political power in the US and Europe than ISIS ever was.”

Twitter currently has no good way of suspending specific white supremacists without human intervention, and so it continues to use human moderators to evaluate tweets. In an email, a company spokesperson told Motherboard that “different content and behaviors require different approaches.”
“For terrorist-related content we’ve [had] a lot of success with proprietary technology but for other types of content that violate our policies—which can often [be] much more contextual—we see the best benefits by using technology and human review in tandem,” the company said.
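As a rough illustration of what “technology and human review in tandem” can mean in practice, here is a hypothetical two-threshold pipeline sketch. The thresholds, names and routing logic are assumptions made for illustration, not Twitter’s actual implementation: content scoring above a high-confidence bar is actioned automatically, while contextual, lower-confidence cases are routed to human reviewers.

```python
# Hypothetical sketch of a tandem machine/human moderation pipeline.
# Thresholds, names, and routing logic are invented for illustration;
# this is not Twitter's actual system.

AUTO_ACTION_THRESHOLD = 0.95   # high confidence: remove automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # contextual cases: queue for a human

def route(tweet_id: str, score: float) -> str:
    """Decide what happens to a tweet given a classifier confidence score."""
    if score >= AUTO_ACTION_THRESHOLD:
        return f"{tweet_id}: auto-removed"             # spam/CSAM/ISIS-style handling
    if score >= HUMAN_REVIEW_THRESHOLD:
        return f"{tweet_id}: queued for human review"  # nuanced, contextual content
    return f"{tweet_id}: left up"

for tweet_id, score in [("t1", 0.99), ("t2", 0.72), ("t3", 0.10)]:
    print(route(tweet_id, score))
```

High-confidence matches get the spam-style automated treatment described above, while the contextual middle band, where white supremacist content tends to land, falls to human moderators.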
Twitter hasn’t done a particularly good job of removing white supremacist content and has shown a reluctance to take action of any kind against “world leaders,” even when their tweets violate Twitter’s rules. But Berger agrees with Twitter that the problem the company is facing with white supremacy is, on a practical level, fundamentally different from the one it faced with ISIS.
“With ISIS, the group’s obsessive branding, tight social networks and small numbers made it easier to avoid collateral damage when the companies cracked down (although there was some),” he said. “White nationalists, in contrast, have inconsistent branding, diffuse social networks and a large body of sympathetic people in the population, so the risk of collateral damage might be perceived as being higher, but it really depends on where the company draws its lines around content.”
But just because eradicating white supremacy on Twitter is a hard problem doesn’t mean the company should get a pass. After Facebook explicitly banned white supremacy and white nationalism, Motherboard asked YouTube and Twitter whether they would make similar changes. Neither company would commit to making that explicit change, and referred us to their existing rules.
“Twitter has a responsibility to stomp out all voices of hate on its platform,” Brandi Collins-Dexter, senior campaign director at the activist group Color Of Change, told Motherboard in a statement. “Instead, the company is giving a free ride to conservative politicians whose dangerous rhetoric enables the growth of the white supremacist movement into the mainstream and the rise of hate, online and off.”
Twitter and YouTube Won’t Commit to Ban White Nationalism After Facebook Makes Policy Switch
Following a Motherboard investigation, Facebook banned white nationalism and white separatism. But Twitter and YouTube, two platforms with their own nationalism problems, won’t commit to following Facebook’s lead.
https://www.vice.com/en_us/article/a3xgq5/why-wont-twitter-treat-white-supremacy-like-isis-because-it-would-mean-banning-some-republican-politicians-too
Keegan Hankes, a research analyst for the Southern Poverty Law Center’s (SPLC) Intelligence Project, told Motherboard in a phone call Tuesday, “I think there is absolutely a need for other platforms to adopt similar policies” to Facebook. “Both YouTube and Twitter have been amongst the worst at getting this content dealt with on their platforms,” Hankes added.
Hankes added that the SPLC has a working relationship with YouTube, but not nearly as much of one with Twitter.
“They’ve been very, very stubborn and basically unwilling to ban people that are outright white supremacists from their platform,” he added. When they do ban people, they’re happy to play a game of whack-a-mole, instead of having a systematic approach, he added.
“They’re still basically at square one for the most part,” Hankes said.
Tech Platforms Obliterated ISIS Online. They Could Use The Same Tools On White Nationalism.
Before killing 50 people during Friday prayers at two mosques in Christchurch, New Zealand, and injuring 40 more, the gunman apparently decided to fully exploit social media by releasing a manifesto, posting a Twitter thread showing off his weapons, and going live on Facebook as he launched the attack.
The gunman’s coordinated social media strategy wasn’t unique, though. The way he manipulated social media for maximum impact is almost identical to how ISIS, at its peak, was using those very same platforms.
While most mainstream social networks have become aggressive about removing pro-ISIS content from the average user’s feed, far-right extremism and white nationalism continue to thrive. Only the most egregious nodes in the radicalization network have been removed from every platform. The question now is: Will Christchurch change anything?
A 2016 study by George Washington University’s Program on Extremism shows that white nationalists and neo-Nazi supporters had a much larger impact on Twitter than ISIS members and supporters at the time. Looking at about 4,000 accounts in each category, white nationalists and neo-Nazis outperformed ISIS in number of tweets and followers, with an average follower count 22 times greater than that of ISIS-affiliated Twitter accounts. The study concluded that by 2016, ISIS had become a target of “large-scale efforts” by Twitter to drive supporters off the platform, like using AI-based technology to automatically flag militant Muslim extremist content, while white nationalists and neo-Nazi supporters were given much more leeway, in large part because their networks were far less cohesive.
The answer to why this kind of cross-network deplatforming hasn’t happened with white nationalist extremism may be found in a 2018 VOX-Pol report authored by the same researcher as the George Washington University study cited above: “The task of crafting a response to the alt-right is considerably more complex and fraught with landmines, largely as a result of the movement’s inherently political nature and its proximity to political power.”
But tech companies and governments can easily agree on removing violent terrorist content; they’ve been less inclined to do this with white nationalist content, which cloaks itself in free speech arguments and which a new wave of populist world leaders are loath to criticize. Christchurch could be another moment for platforms to draw a line in the sand between what is and is not acceptable on their platforms.
Moderating white nationalist extremism is hard because it’s drenched in irony and largely spread online via memes, obscure symbols, and references. The Christchurch gunman ironically told the viewers of his livestream to “Subscribe to Pewdiepie.” His alleged announcement post on 8chan was full of trolly dark web in-jokes. And the cover of his manifesto had a Sonnenrad on it — a sunwheel symbol commonly used by neo-Nazis.
And unlike ISIS, far-right extremism isn’t as centralized. The Christchurch gunman and Christopher Hasson, the white nationalist Coast Guard officer who was arrested last month for allegedly plotting to assassinate politicians and media figures and carry out large-scale terror attacks using biological weapons, were both inspired by Norwegian terrorist Anders Breivik. Cesar Sayoc, also known as the “MAGA Bomber,” and the Tree of Life synagogue shooter, both appear to have been partially radicalized via 4chan and Facebook memes.
It may now be genuinely impossible to disentangle anti-Muslim hate speech on Facebook and YouTube from the more coordinated racist 4chan meme pages or white nationalist communities growing on these platforms. “Islamophobia happens to be something that made these companies lots and lots of money,” Whitney Phillips, an assistant professor at Syracuse University whose research includes online harassment, recently told BuzzFeed News. She said this type of content leads to engagement, which keeps people using the platform, which generates ad revenue.
A spokesperson from Twitter provided BuzzFeed News with a copy of its policy on extremism, in regards to how it moderates ISIS-related content. “You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people,” the policy reads. “This includes, but is not limited to, threatening or promoting terrorism.” The spokesperson would not comment specifically on whether using neo-Nazi or white nationalist iconography on Twitter also counted as threatening or promoting terrorism.
Like the hardcore white nationalist and neo-Nazi iconography used by the Christchurch gunman, the more entry-level memes that likely radicalized the MAGA bomber, and the pipeline from mainstream social networks to more private clusters of extremist thought described by the Tree of Life shooter, ISIS’s social media activity before the large-scale crackdown in 2015 had similar tentpoles. It organized around hashtags, distributed propaganda in multiple languages, transmitted coded language and iconography, and siphoned possible recruits from larger mainstream social networks into smaller private messaging platforms.
Twitter CEO Jack Patrick Dorsey, along with the Board of Directors of Twitter, should in fact be held responsible for any acts of violence, or any acts of murder, committed by Twitter’s white supremacist users, or encouraged by Donald J Trump or any white supremacist.
As we can see from the stories above? Twitter CEO Jack Patrick Dorsey, the Twitter Board of Directors (Omid Kordestani, Patrick Pichette, Martha Lane Fox, Ngozi Okonjo-Iweala, David Rosenblatt, Bret Taylor and Robert Zoellick) and Twitter’s CFO Ned Segal should be held accountable for all the hate, all the bigotry, all the misogyny, all the shit that Donald J Trump spews on a daily basis to his psychotic, white supremacist Twitter followers, and for all the white supremacists and all the Republican racists who also spew racist, bigoted, misogynist bullshit on Twitter. This also includes all the hate, all the bigotry and all the lgbtphobia spewed by White Supremacist Christians who use Twitter to spread their hate and bigotry.
These actions of Trump and his racist punks are, in fact, responsible for some of the mass murders and the incredibly high murder counts of lgbts in the US and across the world.
If Twitter’s CEO Jack Patrick Dorsey, the Board of Directors (Kordestani, Pichette, Fox, Okonjo-Iweala, Rosenblatt, Taylor, Zoellick) and CFO Segal are going to rake in millions of dollars in profits, and pocket the advertiser revenue that Racist Trump and his white supremacists bring into their coffers and bank accounts? And if the hate spewed on Twitter by these white supremacists, by Donald J Trump or by ChristoTaliban Fascists caused a death? Then they should be held responsible for it.

BECAUSE QUITE FRANKLY AND HONESTLY? TWITTER CEO JACK PATRICK DORSEY, CFO NED SEGAL, AND THE BOARD OF DIRECTORS OF TWITTER, ALONG WITH TWITTER STOCKHOLDERS?
SHOULD NOT PROFIT FROM THE SPREAD OF HATE, THE SPREAD OF BIGOTRY, THE SPREAD OF MISOGYNY, THE SPREAD OF WHITE SUPREMACY OR OTHER FORMS OF HATE, SIMPLY BECAUSE THEY ARE PERPETRATED BY THE SUPPOSED PRESIDENT, DONALD J TRUMP, OR BY WHITE SUPREMACISTS WHO HAPPEN TO BE REPUBLICAN, OR BY CHRISTIANS WHO SPEW HATE AND DEATH AND BIGOTRY ON TWITTER AND HIDE IT BEHIND RELIGIOUS RIGHTS.
AND AS SUCH? JACK DORSEY, NED SEGAL AND THE ENTIRE BOARD OF DIRECTORS OF TWITTER SHOULD HAVE TO PAY FOR THEIR CRIMES.
