Culture wars: MSU professor explores the dark patterns of online harassment

Five years after the world’s largest harassment campaign, the Internet remains a place for organized hatred and hoaxes

Imagine waking up one morning to a new reality in which, overnight, you have reached the top of the far right’s hit list. The opening scene sounds like the premise of Nicolas Cage’s next starring role, but the scenario is far from fiction.

Earlier this month, R. David Edelman, a cybersecurity expert at the Massachusetts Institute of Technology, woke up to an email from a friend informing him that he was the face of a new Twitter hoax falsely identifying him as the whistleblower who exposed President Donald Trump’s controversial phone call with the president of Ukraine.

Death threats ensued, and the rest Edelman might’ve been able to predict himself.

This chain of events started with a meme containing a half-truth — Edelman served as technology policy adviser for the Obama administration until 2017 — and ended in tens of thousands of people online convinced that Edelman was a subversive threat to society — a “dark pattern.”

A dark pattern is an online conversation geared to make a person do something they ordinarily wouldn’t in real life, said Liza Potts, professor of digital culture at Michigan State University. Potts is the coauthor of “Values versus Rules in Social Media Communities,” a chapter in “Digital Ethics,” a new anthology due out this summer.

Potts compared “dark patterns” to the spread of conspiracy theories. “You think for an inkling that there is some truth in there, and it appeals to you,” she said. “Then the answer is easy. I shall run in this direction with you and that is why we look at it as dark patterns.”

The Gamergate controversy, the Internet’s most notorious harassment campaign built on a hoax, acted as a proxy war against a female game designer, Zoe Quinn, along with proponents of gender equity in video games.

Since then, large social media platforms have done little to alter their guidelines or moderation to stop defamatory online campaigns from spilling over into real-life consequences. In fact, social media conglomerates, extremist groups, politicians and anyone else who can benefit from harvesting personal data have followed the rhetorical blueprint laid out by Gamergate to influence and target online users, said Potts.

In “Values versus Rules,” Potts and coauthor Michael Trice, a rhetoric professor at the Massachusetts Institute of Technology, argue that social media monopolies such as Facebook, Twitter and the popular discussion site Reddit need to shift from governing by rules to governing by ethics.

“We need new guidance and new ways of being able to handle the harms that are being caused,” said Trice, who has studied harassment campaigns, specifically in online gaming communities, since 2014. “A lot of that is ethical in nature and a lot of it is governance in nature. And the idea that platforms can base all of that ethic around one value, like free speech, it’s just nonsense. No ethical code has one value.”

Ethos of Gamergate

The term “Gamergate” was coined in 2014 by actor Adam Baldwin when he retweeted a YouTube video that broke down a 9,000-word post on 4chan by Quinn’s vengeful ex-boyfriend accusing her of sleeping with a writer for the gaming site Kotaku to get positive reviews of her first major project, “Depression Quest.”

Baldwin, who is no relation to the famous Baldwin Brothers of Long Island, was previously best known for his role as Animal Mother in “Full Metal Jacket.” Part of what propels the perceived truth of these rumor-filled hate brigades is the backing of celebrities, especially those with strong Twitter followings.

“People leading these movements are looking less at publishers and more on platforms. What message do I push on Reddit versus what message do I push on Twitter? It’s not: ‘Let me make sure which article goes here.’ Next, it’s ‘which groups do you align yourself with,’” Potts said.

The claims from Quinn’s ex turned out to be false, as the writer had never even reviewed her game. But the breakup post triggered huge resentment among male gamers convinced that “women are out to destroy video games,” as blogger Kathy Sierra wrote on her site “Serious Pony” in a piece recounting her experience as the target of a neo-Nazi harassment campaign in 2007.

From 4chan to Twitter to the alt-right news site Breitbart, the ethos of the Gamergate campaign hardly varied as a result of the channels it passed through. As Trice and Potts explain in their chapter, the Internet can act as a conduit of rage for destructive communities that lack accountability.

When a self-identifying “wizard,” an involuntarily celibate white male, creeps out of his dark 4chan safe space to join a brigade calling for the death of Quinn, there are no authorities in the cyber or real world to stop him. And when once-famous figures such as alt-right troll Milo Yiannopoulos and the libertarian academic Christina Hoff Sommers joined the brigade, they did little to “abate the original harassment and rumormongering even as the veneer of media criticism focused on challenging feminist culture critics was added to the discussion,” wrote Potts and Trice.

The ethos of mischief was also apparent in the recruiting of online users to engage in the harassment and doxxing (releasing private information on the Internet) of individuals caught in the crosshairs of Gamergate. What better way to prove your intelligence than to publicly embarrass a woman of a higher status?

While the subreddit GamerGhazi marks Jan. 29, 2015, as the close of Gamergate, heralding the trolls as the victors, Trice and Potts provide examples showing that similar trolling groups still exist today and that the tactics for organizing hate are nonpartisan.

Gaining mass appeal

The trolls’ online misogyny-fueled fury did not begin with Gamergate. Feminist gaming critic Anita Sarkeesian received death and rape threats as far back as 2012 and felt forced to leave her home after trolls doxxed her, posting her home address online. Before Sarkeesian, the blogger Kathy Sierra was doxxed in 2007, and her assailant, Andrew “weev” Auernheimer, was later profiled in The Huffington Post and The New York Times. Other trolls regard him as a “hacktivist hero.”

The violent threats and misogynist fantasies of Gamergate still thrive today on sites such as Twitter, Facebook and Reddit. The latter hosts a discussion group called KotakuInAction, which describes itself as “the main hub for Gamergate.”

KotakuInAction, KiA for short, upholds the dubious identity of Gamergate supporters as champions of journalism ethics and antagonists of political correctness. This reflects the shift in the campaign’s rhetoric that came when alt-right flamethrowers like Yiannopoulos, who wrote for Breitbart at the time, got involved. During the height of Gamergate, Reddit responded by writing new guidelines, which KiA’s rules reflect.

“KiA is a community that has formed around this idea that journalists are not serving their needs. So, they have ‘approved’ media sources and ‘unapproved’ media sources and they have rules about allowing traffic. So, it’s less about serving anyone than about how they manage the way information comes into the community,” Trice said.

With 116,000 members and strictly enforced rules, KiA is what Trice and Potts call the “muted” Gamergate, though they point out that its rules also offer a “how-to” on online harassment and conjure up emotion-fueled threads. The key difference from the attack style popularized by Twitter users is that KiA’s guidelines, which comprise 10 rules in a veiled allusion to the ’90s cult movie “Fight Club,” discourage users from engaging in troll-like behavior.

Rule #1, “Don’t Be a Dickwolf” (a term from a rape joke in the gamer community), lists the ways the group could get flagged, from using derogatory slurs when addressing other users to brigading, a phenomenon KiA describes as outsiders coming to the group “to start shit after they’ve been linked.”

Another example of how the group centralizes its discussions is Rule #6: Archive as much as you can. Archiving is done to maintain a clean reputation and to keep commentary within KiA away from challengers. The pattern creates an “echo chamber,” in which someone who may once have been level-headed on an issue is fed back only what they want to hear, said Potts. This dark pattern is certainly not limited to KiA or extremist discussion forums.

“If you watch YouTube enough you will suddenly find yourself just hearing the same information over and over. So, if you go down that rabbit hole on one idea about a conspiracy theory, suddenly you have 20-something videos,” said Potts.

“We are taught that in order to find absolute truth we should look at multiple sources, but it’s not emphasized enough to know what legitimate sources are.”

Leftbook and cycles of oppression

MSU graduate Rebekah Small assisted Trice and Potts in gathering examples of communities subverting platform guidelines, highlighting the shaming groups of Leftbook, where discussions range from “tacky weddings to late-stage capitalism,” Small wrote.

Leftbook is an unofficial term for the many groups with a “far-left political affiliation” that resort to shaming groups such as “That’s it, I’m wedding shaming” or “That’s it, I’m nail shaming” to air grievances about social conventions, racism or downright trashy centerpieces.

The groups’ guidelines, like KiA’s, suggest that ganging up on individuals inside and outside the group, along with doxxing them, was once an issue. Leftbook’s shaming groups and KiA both have implicit or explicit rules against reporting to platform administrators.

“Both sides have pretty heavy distrust of a lot of institutions within the U.S. Certain institutions like the media, the Right has more distrust of than the Left. The Left is far less trusting of law enforcement,” said Trice, pointing out that while the Internet can be used to challenge oppression, such as in the Arab Spring, it can also become a vehicle of oppression.

Groups on Facebook avoid being policed by platform administrators through so-called “modmins,” individuals in the group who are tasked with adding new members and responding to conflict. Modmins help the group avoid racking up flagged comments that could lead to its removal. KiA uses its moderators in the same manner.

Shaming groups geared toward swapping stories of everything from bad manicures to bad reception playlists, while larger in membership, operate similarly to “10-15,” a Facebook group of federal border agents that shared obscene images of dead migrants and of U.S. Rep. Alexandria Ocasio-Cortez, D-N.Y. The posts were leaked in July, causing a national uproar.

Small said the Facebook groups create environments similar to the Stanford Prison Experiment, in which a feigned identity and a lack of moderation let perpetrators of violence and harassment distance themselves from guilt.

Governance vs. ethics

Shifts in the ethical backbone of an industry have happened before. Trice and Potts argue there was a general shift in engineers’ code of ethics after World War II.

In his article “The Ethics of Expediency: Classical Rhetoric, Technology, and the Holocaust,” Steven Katz, an assistant professor of English at North Carolina State University, analyzes how Hitler created a new set of morals that “prioritized scientific and engineering efficiency over human life,” wrote Trice and Potts.

During the Holocaust, Nazi Germany adapted Soviet technology to improve its gas vans. The move was meant to spare soldiers the psychological trauma of shooting women and children execution-style by distancing the actor from the implications of their actions.

By the second half of the 20th century, the National Society of Professional Engineers, the American Society of Mechanical Engineers and the World Medical Association, which was formed by allied groups in response to problems of medical practice, had updated their canons to prioritize public welfare. Previously, the groups’ codes of ethics reflected individual integrity, according to Trice and Potts.

While the chapter doesn’t directly offer solutions to “amplifiers of hate,” as Sierra called them, Trice and Potts pointed to Wikipedia as a neutral community of citizens of the Internet working for a common good: accessible, accurate information.

The authors suggest that a shared ethos promoting the professional development of the greater public is a key principle in creating safer, more productive social media platforms.

In 2014, Facebook released an in-house study that labeled its own service a channel for emotional contagion. The study was conducted in response to academic research suggesting that Facebook had a negative impact on well-being.

The experiment sampled 700,000 users and tracked how filtering positive and negative content from friends would affect a subject’s future posts. Tampering with individuals’ feeds produced mixed results; in any case, basing an individual’s well-being on their ability to keep posting selfies for praise from friends hardly paints a full picture of that person’s life satisfaction.

The ethics of the way the experiment manipulated users is debated among scholars. But the study’s findings may actually enable social media leaders to continue shirking responsibility for the spread of misinformation on their platforms.

Potts said the ultimate reason companies like Facebook are not held accountable for their content is simply “corporate capitalism,” something that won’t be disappearing any time soon.

“If we can come up with algorithms, why can’t we come up with one with stop lights to block horrible images? Like, this is not hard,” said Potts. “This is not a technology problem. This is a leadership problem.”
