Twitter Censoring Ramps Up Further Against Conservatives
by Doug Davis, April 21, 2017
Free speech is under attack across America, and nowhere is this more evident than on the social platform Twitter. The powers that be at Twitter have been implementing new algorithms to suppress what the company considers abusive or low-quality content, as previously reported by Liberty Nation’s Kit Perez. This virtue signaling is making free speech more than a bit challenging for conservatives. It is quite an evolution for a company that used to refer to itself as “…the free speech wing of the free speech party.”
First, a little history. In early 2015, Twitter rolled out more aggressive “abuse filters.” The stated goal was to screen out threatening or violent messages automatically. Within months, as reported in the Washington Post, some users noticed that tweets, including one critical of Hillary Clinton, were disappearing from timelines depending on the geographic location of the computer accessing the feed. Twitter later said that its abuse filter had caused an inconsistency and that the problem had been resolved.
In 2016, Twitter announced the Twitter Trust & Safety Council, which included a rainbow coalition of liberal special interest groups to provide “input on our safety products, policies, and programs.” By late 2016, shortly after Twitter part-time CEO Jack Dorsey announced an anti-harassment pledge, Twitter began suspending the accounts of several prominent alt-right figures. As reported on February 12th, 16th, and March 2nd here on Liberty Nation, in 2017 Twitter expanded its automated anti-abuse measures, removing tweets containing “potentially sensitive content” from feeds and “identifying and collapsing potentially abusive and low-quality replies.”
Creating an algorithm to detect and filter violence and direct hate speech is challenging, but it is possible, and Twitter’s process for that seemed relatively straightforward: Twitter sent the offender a time-out notice indicating that, for a designated period, only the offender’s followers could see their tweets. Many argued that these time-outs were arbitrary and unnecessary, but at least they were out in the open.
But automating the filtering and collapsing of “potentially sensitive content” or “potentially abusive or low-quality replies” is a far more difficult task. So what many say is happening now is a practice called shadowbanning. There may be algorithms involved, but as reported in Breitbart, there are also whitelists and blacklists maintained for favored and unfavored users. Unfavored accounts supposedly aren’t deleted or openly timed out, but their tweets are given lower priority. The result is to limit the unfavored account’s audience without the user ever knowing.
There are several theories on the details of shadowbanning.
- The first theory, according to Mike Keen, is that algorithms target an unfavored user’s supporters. The evidence he presents suggests that President Trump’s tweets, for example, aren’t de-prioritized, but that supportive replies are: even when supportive replies have more likes and retweets than negative responses, the positive replies disappear from the feed. Mr. Keen later observed that this de-prioritization could be overcome by responding rapidly to one’s own tweet, but Twitter has since adapted its algorithm, and now Trump supporters are rapidly pushed down the stack while anti-Trump users are not.
- The second theory is a general de-prioritization of an unfavored account’s tweets. An example of this would be Mike Cernovich, who reported in March that his daily subscriber increase dropped from approximately five hundred to about one hundred, that his daily impressions dropped by thirty to fifty percent, and that his average number of retweets dropped from eight hundred to one hundred seventy-three.
- The third theory, advanced by Lawrence Person, involves identifying the power users in a Twitter user’s follower base and making it harder for those power users to see the unfavored user’s tweets. If power users cannot see your tweet, they cannot retweet it.
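The common thread in these theories is quiet re-weighting rather than outright removal: tweets stay visible in principle but sink in the ranking. A minimal sketch of how such a system could work follows. Everything here is a hypothetical illustration under assumed behavior; the account names, list contents, weights, and scoring formula are invented for the example and are not Twitter’s actual code or policy.

```python
# Hypothetical sketch: a feed-ranking pass that demotes "unfavored" accounts.
# Nothing is deleted; blacklisted authors are merely scored lower, so the
# author sees no visible penalty. All names and weights are assumptions.

from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    likes: int
    retweets: int

# Hypothetical curated lists of favored and unfavored accounts.
WHITELIST = {"favored_pundit"}
BLACKLIST = {"unfavored_pundit"}

def rank_score(tweet: Tweet) -> float:
    """Base engagement score, silently re-weighted by list membership."""
    score = tweet.likes + 2.0 * tweet.retweets
    if tweet.author in BLACKLIST:
        score *= 0.1   # demoted: still visible, but buried deep in the feed
    elif tweet.author in WHITELIST:
        score *= 1.5   # boosted
    return score

def build_feed(tweets: list[Tweet]) -> list[Tweet]:
    """Order replies by score; demoted tweets sink without being removed."""
    return sorted(tweets, key=rank_score, reverse=True)

replies = [
    Tweet("unfavored_pundit", likes=800, retweets=400),  # most raw engagement
    Tweet("favored_pundit", likes=100, retweets=50),
    Tweet("neutral_user", likes=300, retweets=100),
]
feed = build_feed(replies)
print([t.author for t in feed])
# → ['neutral_user', 'favored_pundit', 'unfavored_pundit']
```

Note how the reply with the most likes and retweets lands at the bottom of the feed, matching the pattern Mr. Keen describes: the penalty is invisible in the account’s own metrics and shows up only in where its tweets appear for everyone else.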
Scott Adams, the creator of Dilbert, also suspected that he was the target of a shadowban. He recently noted on his blog that Twitter contacted him: “The official answer is that no one, including me, is shadowbanned on Twitter. It has never happened.” As Mr. Adams points out, the anecdotal evidence that some shadowbanning is taking place is overwhelming, and for Twitter’s sake, if these events are accidental, the company should straighten them out immediately, not only for its stock price but also for its survival.
Mark Grabowski has written an excellent analysis pointing out that, despite liberal protestations to the contrary, private property rights are not an absolute shield against First Amendment claims once you transform your property into the public square, particularly when you suppress only the speech you don’t like. It is arguable that Twitter has done both, whether intentionally or not.