[–]justcool393 3643 points x3 (1028 children)

Hi everyone answering these questions. I have a "few" questions that I, like probably most of reddit, would like answers to. Like a recent AMA I asked questions in, the bold will be the meat of the question, and the non-bolded will be context. If you don't know an answer to a question, say so, and do so directly! Honesty is very much appreciated. With that said, here goes.

Content Policy

  1. What is the policy regarding content that contains distasteful speech but is not harassing? Some subreddits have been known to harbor ideologies such as Nazism or racism. Are users, and by extension subreddits, allowed to behave in this way, or will this be banned or censored?

  2. What is the policy regarding, well, these subreddits? These subreddits are infamous on reddit as a whole. These usually come up during AskReddit threads of "where would you not go" or whenever distasteful subreddits are mentioned.

  3. What actually is the harassment policy? Yes, I know the definition that's practically copypasta from the announcement, but could we have examples? You don't have to define a hard rule; in fact, it'd probably be best if there were a little subjectivity to avoid lawyering, but it'd be helpful to have an example.

  4. What are your thoughts on some people's interpretation of the rules as becoming a safe-space? A vocal group of redditors interpreted the new harassment rules as this, and as such are not happy about it. I personally didn't read the rules that way, but I can see how it may be interpreted that way.

  5. Do you have any plans to update the rules page? It, at the moment, has 6 rules, and the only one that seems to even address the harassment policy is rule 5, which is at best reaching in regards to it.

  6. What is the best way to report harassment? For example, should we use /r/reddit.com's modmail or the contact@reddit.com email? How long should we wait before bumping a modmail?

  7. Who is allowed to report harassment? Say I'm a moderator, and I decide to check a user's history and see they've followed another user around to 20 different subreddits posting the same thing or whatnot. Should I report it to the admins?

Brigading

  1. In regards to subreddits for mocking another group, what is the policy on them? Subreddits that highlight other places being stupid or whatever, such as /r/ShitRedditSays, /r/SRSsucks, the "Badpire", /r/Buttcoin, or pretty much any sub dedicated to mocking people, frequently brigade each other and other places on reddit. SRS has gone out of its way to harass in the past, and while bans may not be applied retroactively, some have recently said they've gotten death threats after being linked to from there.

  2. What are the current plans to address brigading? Will reddit ever support NP (and maybe implement it) or implement another way to curb brigading? This would solve a great many problems in regards to meta subreddits.

    1. Is this a good definition of brigading, and if not, what is it? Many mods and users can't give a good explanation at the moment of what constitutes it. This forces them to resort to, in SubredditDrama's case, banning voting or commenting altogether in linked threads, or, in ShitRedditSays' case, doing nothing at all.

Related

  1. What is spam? Like yes, we know what obvious spam is, but there have been a number of instances in the past where good content creators have been banned for submitting their content.
  2. Regarding the "Neither Alexis nor I created reddit to be a bastion of free speech" comment, how do you feel about this, this, this or this? I do get that opinions change and that I could shit turds that could search reddit better than it does right now, but it's not hard to see that you said otherwise on multiple occasions, especially during the /r/creepshots debacle, even with the literal words "bastion of free speech".
  3. How do you plan to implement the new policy? If the policy is substantially more restrictive, such as combating racism or whatnot, I think you'll have a problem in the long run, because there is just way too much content on reddit, and it will inevitably be applied very inconsistently. Many subreddits have popped back up under different names after being banned.
  4. Did you already set the policy before you started the AMA, and if so, what was the point of it? It seems like from the announcement, you had already made up your mind about the policy regarding content on reddit, and this has made some people understandably upset.
  5. Do you have anything else to say regarding the recent events? I know this has been stressful, but reddit is a cool place and a lot of people use it to share neat (sometimes untrue, but whatever) experiences and whatnot. I don't think the vast majority of people want reddit to implode on itself, but some of the recent decisions and remarks made by the admin team (and former team to be quite honest) are quite concerning.

[–]spez[S,A] 1523 points x2 (775 children)

I’ll try

Content Policy

  1. Harboring unpopular ideologies is not a reason for banning.

  2. (Based on the titles alone) Some of these should be banned since they are inciting violence, others should be separated.

  3. This is the area that needs the most explanation. Filling someone’s inbox with PMs saying, “Kill yourself” is harassment. Calling someone stupid on a public forum is not.

  4. It’s an impossible concept to achieve.

  5. Yes. The whole point of this exercise is to consolidate and clarify our policies.

  6. The Report button, /r/reddit.com modmail, contact@reddit.com (in that order). We’ll be doing a lot of work in the coming weeks to help our community managers respond quickly. Yes, if you can identify harassment of others, please report it.

Brigading

  1. Mocking and calling people stupid is not harassment. Doxxing, following users around, flooding their inbox with trash is.

  2. I have lots of ideas here. This is a technology problem I know we can solve. Sorry for the lack of specifics, but we’ll keep these tactics close to our chest for now.

Related

  1. The content creators one is an issue I’d like to leave to the moderators. Beyond this, if it’s submitted with a script, it’s spam.

  2. While we didn’t create reddit to be a bastion of free speech, the concept is important to us. /r/creepshots forced us to confront these issues in a way we hadn’t done before. Although I wasn’t at Reddit at the time, I agree with their decision to ban those communities.

  3. The main thing we need to implement is the other type of NSFW classification, which isn’t too difficult.

  4. No, we’ve been debating non-stop since I arrived here, and will continue to do so. Many people in this thread have made good points that we’ll incorporate into our policy. Clearly defining Harassment is the most obvious example.

  5. I know. It was frustrating for me to watch as an outsider as well. Now that I’m here, I’m looking forward to moving forward and improving things.

[–]codyave 242 points (263 children)

3) This is the area that needs the most explanation. Filling someone’s inbox with PMs saying, “Kill yourself” is harassment. Calling someone stupid on a public forum is not.

Forgive me for a pedantic question, but what about telling someone to "kill yourself" in a public forum, will that be harassment as well?

[–]spez[S,A] 682 points (212 children)

I can give you examples of things we deal with on a regular basis that would be considered harassment:

  • Going into self help subreddits for people dealing with serious emotional issues and telling people to kill themselves.
  • Messaging users serious threats of harm against them or their families.
  • Less serious attacks - but ones that are unprovoked and sustained and go beyond simply being an annoying troll. An example would be following someone from subreddit to subreddit repeatedly and saying “you’re an idiot” when they aren’t engaging you or instigating anything. This is not only harassment but spam, which is also against the rules.
  • Finding users’ external social media profiles and taking harassing actions or using the information to threaten them with doxxing.
  • Doxxing users.

It’s important to recognize that this is not about being annoying. You get into a heated conversation and tell someone to fuck off? No one cares. But if you follow them around for a week to tell them to fuck off, despite their moving on - or tell them you’re going to find and kill them, you’re crossing a line and that’s where we step in.

[–]_username_goes_here_ 87 points (13 children)

I like this type of list.

I would be interested in clarification of the following:

A) Does a collection of people engaged in not-quite-across-the-line harassment start to count as full-on harassment by virtue of being in a group - even if said group is not organized? What about if someone instigates and many people respond negatively? If a person of color were to go into coontown and start posting, for example, the sub would jump on them with hate, but in that place it would be about par for the course.

B) At what point do the actions of a minority of users run the risk of getting a subreddit banned vs just getting those users banned?

[–]Senray 393 points (103 children)

The content creators one is an issue I’d like to leave to the moderators. Beyond this, if it’s submitted with a script, it’s spam.

Uh, this would ban all bots

OKAY THANKS FOR THE REPLIES I GET IT

[–]spez[S,A] 268 points (48 children)

I meant specifically in regard to "content creators." For example, it used to be common that a site would write a script that automatically spammed multiple subreddits every time they wrote something.

[–]dowhatuwant2 902 points x2 (199 children)

Vote counts, before and after, of a SRS brigade

SRD thread about /u/potato_in_my_anus getting shadowbanned

SRD talks about SRS doxxing

/r/MensRights on /u/violentacrez being doxxed

SRSters asking for a brigade

More brigading

An entire post of collected evidence

An entire thread that contains evidence of brigading, along with admin bias in favor of SRS

Here's a PM that mentions doxxing and blackmailing

Direct evidence of /u/violentacrez being doxxed

SRS getting involved in linked threads as of 2/21/14

SRSters asking for a witch-hunt after being banned from /r/AskReddit

"Organic" voting. Downvotes on a two day thread after SRS gets to it.

User actually admits to voting in linked threads

Is there any more serious evidence of SRS abuse? All of this is a mix of different dates, 8 months old or older, so some more recent evidence would be greatly appreciated. It would be good to know if we're in the right here or if we need to reevaluate; however, I'm fairly certain that we're not the shit posters here. I can foresee another bout of SRS-related drama flaring up soon. It would be nice to find something recent to support our position, because then nobody would be able to claim that SRS has changed.

Let's please avoid duplicates. Go for the two-deep rule: don't post something as evidence if it can be reached within one click of a source. If you have to go deeper, then feel free to post it.

Update: Evidence post of SRS organizing to ruin the lives of multiple people.

Update: the admin /u/intortus is no longer a part of the admin team and is now a mod of SRS, as shown by this picture (as of 3/19/14). This is clear evidence that at least one admin is affiliated with SRS in a clear way, thus giving credibility to the notion that SRS has or had at least partial admin support.

Update: There is also evidence that SRS is promoting or otherwise supporting the doxxing of /u/violentacrez. RationalWiki has a section on Reddit and the moderator there is pro-SRS; in the section on /u/violentacrez, there is personal information (name and location) about where he lives. I won't link to it, but you can look for yourself.

Update: An entire post of evidence that SRS brigades. Courtesy of /u/Ayevee

Update: Here's SRS brigading a 2 week old thread, as of 4/27. Ten downvotes since it was submitted.

Update: An album of SRD mods banning a user and removing his posts when he calls out SRD mods for being in line with SRS

Subreddit analysis, where SRS posters are also posters in SRD en masse (highest on the list).

Source

[–]glasnostic 33 points (0 children)

Would you call it harassment to repeatedly name-link a person in a drama sub that they are banned from?

Would you consider it doxxing or harassment to create a user name from another reddit user's account name on a non-private social media site, and then start posting in that person's local city sub?

What about doxxing outside reddit by tying a username to that person's real identity so that it will be easily found by google?

All of this happened to me and the admins told me their hands were tied, and they would do nothing to help me.

[–]DashFerLev 151 points (7 children)

Mocking and calling people stupid is not harassment. Doxxing, following users around, flooding their inbox with trash is.

Yesterday I reported an SRS user who followed an SRS link to a comment and told another user they should kill themselves. /u/sporkicide replied and the comment was removed.

How often does this have to happen before it counts as harassment and gets the subreddit shut down?

I mean, the number of targeted links is relatively small and it's pretty easy to spot the SRS users (just browse by new). All you have to do is look.

Also, their head mod bragged about DDoSing and harassing Voat's hosts and PayPal, if constantly targeting and hurling abuse at other redditors isn't enough.

That sub is the most toxic on the website.

[–]Georgy_K_Zhukov 865 points (451 children)

Recently you made statements that many mods have taken to imply a reduction in control that moderators have over their subreddits. Much of the concern around this is the potential inability to curate subreddits to the exacting standards that some mod teams try to enforce, especially in regards to hateful and offensive comments, which apparently would still be accessible even after a mod removes them. On the other hand, statements made here and elsewhere point to admins putting more consideration into the content that can be found on reddit, so all in all, messages seem very mixed.

Could you please clarify a) exactly what you mean/envision when you say "there should also be some mechanism to see what was removed. It doesn't have to be easy, but it shouldn't be impossible." and b) whether that was an off-the-cuff statement, or a peek at upcoming changes to the reddit architecture?

[–]spez[S,A] 835 points (437 children)

There are many reasons for content being removed from a particular subreddit, but it's not at all clear right now what's going on. Let me give you a few examples:

  • The user deleted their post. If that's what they want to do, that's fine, it's gone, but we should at least say so, so that the mods or admins don't get accused of censorship.
  • A mod deleted the post because it was off topic. We should say so, and we should probably be able to see what it was somehow so we can better learn the rules.
  • A mod deleted the post because it was spam. We can put these in a spam area.
  • A mod deleted a post from a user that constantly trolls and harasses them. This is where I'd really like to invest in tooling, so the mods don't have to waste time in these one-on-one battles.

edit: A spam area makes more sense than hiding it entirely.
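
To make the distinction above concrete, here is a minimal sketch of how per-reason removal placeholders might be rendered. This is purely illustrative and not Reddit's actual implementation; the RemovalReason names and render_removed function are hypothetical.

    from enum import Enum

    class RemovalReason(Enum):
        USER_DELETED = "deleted by user"
        MOD_OFF_TOPIC = "removed by moderator: off topic"
        MOD_SPAM = "removed by moderator: spam"

    def render_removed(reason: RemovalReason, body: str) -> str:
        """Text shown in place of a removed comment (hypothetical behavior)."""
        if reason is RemovalReason.MOD_OFF_TOPIC:
            # Off-topic removals stay reachable so readers can learn the rules.
            return f"[{reason.value}]\n(original text, collapsed: {body})"
        if reason is RemovalReason.MOD_SPAM:
            # Spam is moved to a separate spam area rather than shown inline.
            return f"[{reason.value}] (moved to spam area)"
        # User deletions show only the label, with no body.
        return f"[{reason.value}]"

    # Example: an off-topic removal keeps the content reachable behind a label.
    print(render_removed(RemovalReason.MOD_OFF_TOPIC, "a comment about something else"))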

[–]TheBQE 706 points (128 children)

I really hope something like this gets implemented! It could be very valuable.

The user deleted their post. If that's what they want to do, that's fine, it's gone, but we should at least say so, so that the mods or admins don't get accused of censorship.

[deleted by user]

A mod deleted the post because it was off topic. We should say so, and we should probably be able to see what it was somehow so we can better learn the rules.

[hidden by moderator. reason: off topic]

A mod deleted the post because it was spam. No need for anyone to see this at all.

[deleted by mod] (with no option to see the post at all)

A mod deleted a post from a user that constantly trolls and harasses them. This is where I'd really like to invest in tooling, so the mods don't have to waste time in these one-on-one battles.

Can't you just straight up ban these people?

[–]GustavoFrings 96 points (54 children)

Can't you just straight up ban these people?

They come back. On hundreds of accounts. I'm not exaggerating or kidding when I say hundreds. I have a couple of users who have been trolling for over a year and a half. Banning them does nothing; they just hop onto another account.

[–]spez[S,A] 136 points (40 children)

That's why I keep saying, "build better tools." We can see this in the data, and mods shouldn't have to deal with it.

[–]Georgy_K_Zhukov 107 points (9 children)

  • A mod deleted the post because it was off topic. We should say so, and we should probably be able to see what it was somehow so we can better learn the rules.
  • A mod deleted the post because it was spam. No need for anyone to see this at all.

That's all well and good, but how is this distinction made? Would mods now have a "soft" remove and a "hard" remove option for different situations? I can see situations where even in /r/AskHistorians we might want to just go with the "soft" option, but would this be something that mods still have discretion over, or would the latter have to be reported for admins to take action on?

[–]Shanix 671 points (37 children)

So basically a deletion reason after the [deleted] message?

  • [deleted: marked as spam]
  • [deleted: user deleted]
  • [deleted: automoderator]

That'd be nice.

[–]FSMhelpusall 282 points (58 children)

What will keep mods from wrongly classifying comments they don't like as "spam" to prevent people from seeing them?

Edit: Remember, you currently have a problem of admin* (Edit of edit, sorry!) shadowbanning, which was also intended only for spam.

[–]mobiusstripsearch 423 points (105 children)

What standard decides what is bullying, harassment, abuse, or violent? Surely "since you're fat you need to commit suicide" is all four and undesirable. What about an individual saying in private "I think fat people need to commit suicide" -- not actively bullying others but stating an honest opinion. What about "I think being fat is gross but you shouldn't kill yourself" or "I don't like fat people"?

I ask because all those behaviors and more were wrapped in the fatpeoplehate drama. Surely there were unacceptable behaviors. But as a consequence a forum for acceptable behavior on the issue is gone. Couldn't that happen to other forums -- couldn't someone take offense to anti-gay marriage advocates and throw the baby out with the bath water? Who decides what is and isn't bullying? Is there an appeal process? Will there be public records?

In short, what is the reasonable standard that prevents anti-bullying from becoming bullying itself?

[–]spez[S,A] 316 points (93 children)

"since you're fat you need to commit suicide"

This is the only one worth considering as harassment. Lobbing insults or saying offensive things doesn't automatically make something harassment.

Our Harassment policy says "Systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them," which I think is pretty clear.

[–]Darr_Syn 1131 points (505 children)

Thanks for doing this AMA.

I'm a moderator of more than a few NSFW subreddits, including /r/BDSMcommunity and /r/BDSM, and as I stated in the teaser announcement earlier this week: this decision, and the specific wording, is worrying.

I want to specifically address this:

Anything that incites harm or violence against an individual or group of people

As well as your earlier comment about things being seen as "offensive" and "obscene".

There are sections of the world, and even the United States, where consensual BDSM and kink are illegal.

You can see where this is the type of announcement that raises more than a few eyebrows in our little corner of the world.

At what point are minority opinions and positions deemed obscene, offensive, and unwanted?

BDSM between two consenting adults has been seen and labeled as both offensive and obscene for decades now.

[–]spez[S,A] 1033 points (442 children)

I can tell you with confidence that these specific communities are not what we are referring to. Not even close.

But this is also why I prefer separation over banning. Banning is like capital punishment, and we don't want to do it except in the clearest of cases.

[–]SpawnPointGuard 231 points (69 children)

But this is the problem we've been having. Even if we're not on the list, the rules seem so wishy washy that none of us know how to even follow them. There are a lot of communities that don't feel safe because of that. The last wave of sub bans used reasoning that didn't apply. In the case of /r/NeoFAG, it was like the admins didn't even go there once before making the decision. It was a sub that was critical of the NeoGAF forums, such as the leader using his position to cover up a sexual assault he committed against a female user he met up with. /r/NeoGAFInAction was banned as well without justification.

All I ask is that you please reevaluate the previous bans.

[–]The_Year_of_Glad 189 points (68 children)

I can tell you with confidence that these specific communities are not what we are referring to. Not even close.

This is why it is important for you to clarify exactly what you mean by "illegal" in the original post of rules. E.g. British law on BDSM and BDSM-related media is fairly restrictive.

[–]mydeca 438 points (172 children)

Perhaps you could go into more detail about the communities that you are referring to? I think that would be very relevant here.

[–]AlphaWolf101 1380 points (241 children)

When will something be done about subreddit squatters? The existing system is not working. Qgyh2 is able to retain top mod of many defaults and large subreddits just because he posts a comment every two months. This is harming reddit as a community when lower mods are veto'd and removed by someone who is only a mod for the power trip. Will something be done about this?

[–]spez[S,A] 666 points (214 children)

I agree it's a problem, but we haven't thought through a solution yet.

[–]ZadocPaet 242 points (53 children)

Here's an easy solution. Change the rules for subreddit request to make it so that if mods aren't actively moderating a sub then a user can reddit request the sub.

As it stands right now, the mod must not have been active on reddit for 90 days in order for a redditor to request the subreddit in /r/redditrequest.

Just change it so the moderator must have been active in their sub within the past 90 days. That means approving posts, voting, commenting, posting, answering mod mails, et cetera.

[–]theNYEHHH 193 points (9 children)

But you can see modlogs and check if they're doing anything to help out in the subreddit. It's frustrating for the mods of /r/pics etc when the person who is most in charge of the subreddit doesn't even check the modmail.

[–]XIGRIMxREAPERIX 467 points (256 children)

/u/spez I am confused about the illegal portion. Are we allowed to talk about pirating, but not link to it, in /r/tpb? Can we have a discussion in /r/trees about why we should produce marijuana, but not how to produce it?

This seems like a very large grey area in terms of everything.

[–]spez[S,A] 600 points (238 children)

Nothing is changing in Reddit's policy here. /r/trees is totally fine. At a very high level, the idea is that we will ban something if it is against the law for Reddit to host it, and I don't believe your examples qualify.

[–]DEATH-BY-CIRCLEJERK 785 points (282 children)

Hi Steve,

I think this is a question I've not seen asked or addressed anywhere on reddit before, so I hope this is a good contribution to this AMA and discussion.

Do you see an issue with more and more default subreddits configuring their AutoModerator to automatically remove comments from users who have just joined? On numerous occasions a friend or family member has created an account after I told them about reddit, only to find, when I go to their overview page and follow the permalink to their actual comments, that the comment is missing. I presume moderators are doing this to mitigate trolls or something, but I think it might become a systemic problem if all of the defaults move in this direction. How is anyone going to be able to get enough karma to get out of the automod filter if none of their comments get seen?

Thanks.

[–]spez[S,A] 589 points (256 children)

Agreed, this is a problem if true.

The first step is to give the mods better tools so they don't need to resort to tactics like this.

[–]doug3465 425 points (118 children)

How long will that step take?

Admins have been promising this for years. Adding a realistic time estimate to all of these mod-tools comments would make sense.

Edit: They said 6 months, and then their chief engineer quit because of "unreasonable demands."

[–]spez[S,A] 117 points (22 children)

When it comes to software development, committing to exact dates is a fool's errand.

However, I can say with great confidence it won't take six months.

[–]Deimorz[A] 210 points (35 children)

I made a comment the other day addressing the 6 month timeline thing, I'm going to post it again here:


I think there's been a fair amount of confusion about some of this, which is certainly understandable because so much happened so quickly. I think it's important to understand that these three things happened in this sequence:

  1. Alexis gives timelines to mods for specific things
  2. I get assigned to focus on moderator issues
  3. Ellen resigns and Steve comes back as CEO

It's definitely not that we don't think we're going to have anything done in 3 or even 6 months, we're absolutely going to get quite a bit done. That's a very long time to get things done when there are resources devoted to it, it's mostly just the order that things happened in that have made this confusing. Specifically, we want to make sure that we're focusing on the right things first, so it's important that we start having conversations directly with mods to find out what that is, instead of being committed to working on the two things Alexis mentioned. They're both definitely important issues, but I don't know if they're the most important ones. That's why we've been trying to step back from those promises a bit, not because we think they're impossible but because we're not sure if they're even the right promises.

Steve coming back as CEO is also a really big step here. Even in the announcement post, he listed improving moderator tools as one of his top priorities. From talking with him so far, it's been very clear that this is something he wants to make sure we make some major improvements to soon, and I'm confident that he's going to make sure that we get a lot of updates made in the fairly near future.

Overall, things are definitely still not settled, and I expect they probably still won't be for a little while yet. The last couple of weeks have been rough for everyone, but I think we're making some good steps now, and things are going to get better.

[–]airmandan 233 points (34 children)

Agreed, this is a problem if true.

Default moderator here. It's true, and it happens largely because we have no way to stop trolls who've been banned from registering 75 new accounts to skirt the ban. At one point, /r/gaming's AutoModerator configuration was removing upwards of 70% of submissions and comments posted to that subreddit.

I am personally opposed to these kinds of uses of AutoMod, including banwords lists that trigger auto-removal. OTOH, you are absolutely correct that the reason these things happen is because we just don't have the kind of tools needed to properly manage communities with millions and millions of members.

You get places with 10 humans, maybe 6 of which are active in any sense, trying to moderate a community of five million. And they can't even IP-ban. So AutoMod gets dialed up to 11, legitimate new users are the baby that gets thrown out with the bathwater, and the site suffers overall.

[–]Llim 139 points (39 children)

I moderate /r/Interstellar and can offer some input on this issue.

A few months ago we had a serious problem with users from /r/MoviesCirclejerk brigading our subreddit and creating new accounts to troll, spam, and harass users. So we resorted to having AutoModerator filter out all users that were less than two days old - it worked wonderfully.

You keep talking about implementing new "tools" to give moderators more control, but honestly AutoModerator works fantastically. What other kinds of tools do the admins have in mind?
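
As a side note, here is a minimal sketch of the kind of account-age filter described above, written as standalone Python rather than AutoModerator's actual configuration syntax; MIN_ACCOUNT_AGE and should_filter are illustrative names, not part of any real Reddit API.

    from datetime import datetime, timedelta, timezone

    # Hypothetical threshold matching the two-day rule described above.
    MIN_ACCOUNT_AGE = timedelta(days=2)

    def should_filter(account_created: datetime) -> bool:
        """Return True if a comment should be held for moderator review
        because the author's account is younger than MIN_ACCOUNT_AGE."""
        return datetime.now(timezone.utc) - account_created < MIN_ACCOUNT_AGE

    # Example: an account created yesterday would be filtered.
    yesterday = datetime.now(timezone.utc) - timedelta(days=1)
    print(should_filter(yesterday))  # True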

[–]Vmoney1337 3289 points (4163 children)

I guess I'll ask the question that everyone else wants to hear the answer to: What subreddits are you considering banning, and what would be your basis for doing so?

[–]spez[S,A] 1800 points x2 (3958 children)

We'll consider banning subreddits that clearly violate the guidelines in my post--the ones that are illegal or cause harm to others.

There are many subreddits whose contents I and many others find offensive, but that alone is not justification for banning.

/r/rapingwomen will be banned. They are encouraging people to rape.

/r/coontown will be reclassified. The content there is offensive to many, but does not violate our current rules for banning.

edit: elevating my reply below so more people can see it.

[–]jstrydor 1023 points (2534 children)

We'll consider banning subreddits that clearly violate the guidelines in my post

I'm sure you guys have been considering it for quite a while, can you give us any idea which subs these might be?

[–]spez[S,A] 2143 points x4 (2435 children)

Sure. /r/rapingwomen will be banned. They are encouraging people to rape.

/r/coontown will be reclassified. The content there is offensive to many, but does not violate our current rules for banning.

[–]xlnqeniuz 629 points (879 children)

What do you mean by 'reclassified'?

Also, why wasn't this done with /r/Fatpeoplehate? Just curious.

[–]spez[S,A] 657 points (820 children)

I explain this in my post. Similar to NSFW but with a different warning and an explicit opt-in.

[–]EmilioTextevez 779 points (149 children)

Have you thought about simply revoking "offensive" subreddits' ability to reach /r/All? So only the users of those communities come across it when browsing Reddit?

[–]spez[S,A] 129 points (103 children)

That's more or less the idea, yes, but I also want to claim we don't profit from them.

[–]Sargon16 77 points (19 children)

How does it work then if someone gilds a post in one of the 'unsavory' subreddits? I mean reddit still gets the money right? Will you just disable gilding in those places?

Or here's an idea, donate revenue from the unsavory subreddits to charity.

[–]PicopicoEMD 380 points (142 children)

So could a subreddit equivalent to fph be made, as long as its mods were clear about not allowing brigading and death threats, and actually enforced this?

It seems fph would qualify as distasteful but not harmful inherently (as long as it was modded correctly it wouldn't be).

Disclaimer: I didn't like fph.

[–]ChrisTaliaferro 368 points (103 children)

Honestly this sounds crazy to me; people suggest the killing of all blacks in coontown all the time.

I'm a black man, but I'm also a huge believer in free speech even in places like this where it isn't a legally protected right, so quite frankly I'm willing to put up with coontown if it means freedom across the board for everyone.

However,

If you're going to tell me that you can't talk about hating fat people or fantasizing about raping women, but can say "All niggers must die.", that's messed up and it really doesn't make me feel comfortable to be here as a person of color.

Edit: TL;DR, /r/coontown is responsible for things that are just as bad as some banned subs, either the banned ones come back or coontown should go.

2nd Edit: If you don't think /r/coontown is harassing outside of their sub, here's one of their regulars posting his thoughts on my reading Green Eggs and Ham to my son's second grade class in /r/trueblackfathers http://i.imgur.com/85u0wCY.png

3rd Edit: Here's a user casually talking about either killing all blacks or "sending them back" http://i.imgur.com/he9kVQp.png

[–]obadetona 713 points (677 children)

What would you define as causing harm to others?

[–]spez[S,A] 726 points (661 children)

Very good question, and that's one of the things we need to be clear about. I think we have an intuitive sense of what this means (e.g. death threats, inciting rape), but before we release an official update to our policy we will spell this out as precisely as possible.

Update: I added an example to my post. It's ok to say, "I don't like this group of people." It's not ok to say, "I'm going to kill this group of people."

[–]mydeca 440 points (88 children)

Yea, but how are you going to determine that the subreddit itself is at fault? There are going to be a few individuals in every subreddit who cause harm; how do you determine that the sub itself is at fault enough to be banned?

[–]spez[S,A] 320 points (70 children)

We won't formally change our policy until we have the tools to support it. Giving moderators better tools to deal with individuals is an important part of this process. Giving our employed community managers additional tools to assist the moderators is also required.

[–]IM_THAT_POTATO 350 points (9 children)

So you are saying that a subreddit being banned will most often be a result of the moderators failing to uphold the sitewide rules? Will there be a warning system? Will there be an appeal system?

Edit: Does this allow a moderator to tank a community easily?

[–]Adwinistrator 477 points (292 children)

Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

How will this be interpreted in the context of spirited debates between large factions of people (usually along ideological lines)?

The following example can usually be found on both sides of these conflicts, so don't presume I'm speaking about a particular side of a particular debate:

There have been many cases of people accusing others of harassment or bullying, when in reality a group of people is shining a light on someone's bad arguments, or bad actions. Those that now see this, voice their opinions (in larger numbers than the bad actor is used to), and they say they are being harassed, bullied, or being intimidated into silence.

How would the new rules consider this type of situation, in the context of bullying, or harassment?

[–]spez[S,A] 174 points (212 children)

Spirited debates are an important part of what makes Reddit special. Our goal is to spell out clear rules that everyone can understand. Any banning of content will be carefully considered against our public rules.

[–]abcabcdeabc 382 points x4 (79 children)

I have been a redditor for a very long time, and I've been part of a range of kinds of communities that vary fairly significantly.

I am also a female who was raped, and this is something I have been open about discussing fairly frequently on reddit.

I disagree with the ban of the aforementioned sub, because I feel that it sets a precedent depending on what the society deems appropriate to think about, and what it does not.

Please note, that I can not and do not pretend to speak for any woman who was raped besides myself.

What I am concerned with is this distinct drawing of a line between the people who own the site, and the people who create the content on the site. Reddit appealed to me because it was the closest thing to a speaking democracy I could find in my entire existence, utilizing technology in a way that is almost impossible to recreate across large populations of people otherwise.

This sequence of events marks this as a departure from that construct. From today onwards, I know that I am not seeing clusters of people with every aspect of their humanity shown, as ugly as it may be sometimes. I feel that it is not the subreddit that causes subs like /r/rapingwomen to exist, but this stems from a larger cultural problem. Hiding it or sweeping it under a rug from the masses is not what solves the problem; I have already lived under those rules and I have seen them to be ineffective at best and traumatizing / mentally warping at worst.

People's minds should not be ruled over by the minds of other people, and that is what I feel this has become. Internet content is thought content, idea content. It is not the act of violence - these are two very separate things. You can construct a society that appears to value and cherish women's rights in the highest regard, and yet the truth can be the furthest thing from it.

I really would hope that you would reconsider your position. To take away the right of being able to know with certainty that one can speak freely without fear, I don't have many words to offer that fully express my sadness at that.

The problem is not the banning of specifics. The problem is how it affects how people reason afterwards about their expectations of the site and their interactions with others. It sets up new social constructs and new social rules, and will alter things significantly, even fractions of things you would not expect. It is like a butterfly effect across the mind, to believe you can speak freely, and to have that taken away.

[–]alexanderwales 437 points (66 children)

But you haven't clearly spelled out the rules. What does this:

Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

Even mean? It seems totally subjective.

[–]HungryMoblin 183 points (38 children)

That's a good idea, because I think what the community is seeking right now is straight guidelines that they can follow. /r/cringe for example, the sub actively takes a stance against off-site harassment (yes, including death threats), but it happens every time someone forgets to blur a username. This isn't the fault of the moderators at all, who are actively preventing harm, but the users. How do you intend on handling a situation like that?

[–]Final_Check_My_PC 249 points (24 children)

How do you plan on determining who is an authentic member of a subreddit?

If I make a few posts to /r/ShitRedditSays and then go harass members of /r/kotakuinaction or /r/theredpill would that then be enough to get /r/shitredditsays banned?

How do you hope to combat strategies such as this?

[–]IM_THAT_POTATO 76 points (15 children)

Similar to NSFW, another type of content that is difficult to define, but you know it when you see it, is the content that violates a common sense of decency. This classification will require a login, must be opted into, will not appear in search results or public listings, and will generate no revenue for Reddit.

Is it the admins who will be deciding what this "common sense of decency" is?

[–]DuhTrutho 106 points (16 children)

This is what everyone wants more clarification about, hehe: what is the true justification for banning?

If you tried to go onto FPH and mention that you were fat you would be banned by the mods.

FPH was a relatively contained sub before the leaking happened, but is banning those who come onto your sub considered bullying?

In the same vein, if I were to go onto either /r/TwoXChromosomes or /r/Shitredditsays and post about men's rights, or go onto /r/TheRedPill and post about women's rights, I would get downvoted, ridiculed, and most likely banned.

Please define what you mean in detail.

[–]monsda 219 points (145 children)

Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

How will you determine that?

What I'm getting at is - how would you make a distinction between a sub like /r/fatpeoplehate, and a sub like /r/coontown?

[–]bl1y 20 points (0 children)

You would ban subs that engage in harassment, which Reddit defines as:

systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that Reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them

Can you elaborate on the italicized portion? What does it mean to be a safe platform to express ideas? Do you mean safe from physical harm and criminal harassment? If so, it seems redundant given (2). If not, what exactly does this mean?

[–]-Massachoosite 1234 points (304 children)

Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

This needs to be removed.

There is no other way around it. It's too broad. Is /r/atheism bullying /r/christianity? Is /r/conservative bullying /r/politics?

We need opposing views. We need people whose stupidity clashes against our values. Most importantly, we need to learn how to deal with these people with our words. We need to foster an environment where those people are silenced not with rules, but with the logic and support of the community.

[–]spez[S,A] 500 points (264 children)

I'm specifically soliciting feedback on this language. The goal is to make it as clear as possible.

[–]RamsesThePigeon 184 points (20 children)

While we're on the topic of specific language, can we make it a goal to define what exactly is meant by each type of prohibited content?

Spam
Is someone who frequently posts "spamming," or does the word specifically describe content that directs to advertisements and malware?

Anything Illegal
According to whose laws?

Publication of someone's private and confidential information
What constitutes "private and confidential?"

Anything that incites harm or violence
If I write a comment in which I suggest that the Muppets are guilty of hate-speech, and if my comment prompts someone to harass Kermit the Frog, am I at fault?

Anything that harasses, bullies, or abuses an individual or group of people
Others have touched on this one already. The question remains.

Sexually suggestive content featuring minors
If I tell the story of losing my virginity (at age sixteen), am I breaking a rule? What if I talk about sneaking into the women's locker room at age six?

[–]zk223 1270 points (114 children)

Here you go:

No Submission may identify an individual, whether by context or explicit reference, and contain content of such a nature as to place that individual in reasonable fear that the Submitter will cause the individual to be subjected to a criminal act. "Reasonable fear," as used in the preceding sentence, is an objective standard assessed from the perspective of a similarly situated reasonable person.

[–]colechristensen 115 points (6 children)

Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

There is no language which is going to make this acceptable.

What this says is you are no longer to express negative opinions about any person or group.

Is http://stuffwhitepeoplelike.com/ harassment? It's funny, not hateful, but clearly singles out a single group. Is /r/blackpeopletwitter harassment? It can be pretty funny too (sure, there are a minority of racists in there spreading hate).

How about berating Sean Hannity for his bullshit about waterboarding? Can we hate on Vladimir Putin?

In an open forum, people need to be able to be called out on their shit. Sometimes for amusement, sometimes for serious purposes. "Harassment" is ill defined. We can all agree that encouraging internet idiots to gather their pitchforks is almost always a bad idea (or maybe not, what about gathering petition signatures?)

There are a lot of fat people who are really full of themselves and spout nonsense about "loving your body" when in reality they're promoting hugely dangerous behaviors. Some of the reactionaries go way overboard as well – you end up trying and ultimately failing to make a line in the sand because there isn't any real distinction you can draw.

You can ban serious hate speech (which is hard to define, but still easy enough to see, like pornography), and you can ban brigading behaviors.

You can't ban "harassment" because there's no definition.

This hyper-sensitive culture that's arising is a real problem, and you're promoting it.

Some notes in a similar vein: http://www.ew.com/article/2015/06/08/jerry-seinfeld-politically-correct-college-campuses

[–]verdatum 710 points (166 children)

ITT: People who have been waiting to hit ctrl+v "save" for at least a day now.

[–]hansjens47 246 points (71 children)

www.Reddit.com/rules outlines the 5 rules of reddit. They're really vague, and the rest of the Reddit wiki has tonnes of extra details on what the rules actually imply.

What's the plan for centralizing the rules so they make up a "Content Policy"?

[–]bhalp1 105 points (224 children)

I generally agree with the outline above. Do you have ideas for the name of this second classification? I feel like this kind of thing is easy to conceptualize, hard to bucket and actually classify, and will come down to semantics. The naming of things is such an important factor in how they are accepted and understood by the community. Is there a list of names you are considering?

Thanks for the transparency. My favorite thing about Reddit is that it is a platform that gives a voice to the many without garbling it down to the lowest common denominator (but that also happens sometimes.) My least favorite things are the hateful subcultures that exist and feel entitled to never have their views even questioned or criticized. I appreciate that Reddit does not try to decide what is right or wrong, but I also appreciate a clear stance against hate and harassment.

[–]spez[S,A] 127 points (218 children)

I've tried a lot of names, and none of them fit. I'm all ears. The challenge is that the content itself is very difficult to describe as well.

[–]saturnhillinger 251 points x2 (44 children)

Just call it "opt-in content", then define opt-in content as you have above in the general FAQ.

Quick edit: the FAQ definition could look something like this: "Opt-in content is content which is clearly in conflict with common decency, yet does not merit complete removal from reddit. To see opt-in content, you must create an account and configure your settings accordingly."

[–]slazenger7 22 points (2 children)

I like the idea of NSFA, but this is way too easily confused with NSFW. I also like the darknet connotations.

I would suggest Off the Record (OTR).

This implies that reddit does not endorse this content and that it will not be found on the main site. It also reflects the fact that users are inherently speaking anonymously, and should have the opportunity to voice their non-threatening, legal, unpopular opinions authentically, honestly, and without fear of repercussions.

My two cents.

[–]ItsMeCaptainMurphy 413 points (129 children)

You really need to clarify

Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

because that's rather vague and is very much open to interpretation (one person's definition of harassment is not necessarily another's - is it harassment just because one person says so?). To be honest, I see nothing here that's really new to the existing content policy outside of "the common decency opt in", which I'm probably ok with - that will depend on how it's implemented and what is classified as abhorrent.

[–]spez[S,A] 13 points (118 children)

Right. This isn't different from what we have right now, but we really need to enforce it better.

[–]Elan-Morin-Tedronai 282 points (41 children)

It's just a really vague rule. /r/fatlogic continually critiques posts on social media made by fat activists; is that harassment? What about /r/subredditdrama? All they do is make fun of other redditors. /r/justneckbeardthings is pretty much devoted to picking on random fat people with beards. The line you drew is just incredibly vague.

[–]JamisonP 178 points (12 children)

...I think you need to figure out what it is before you start enforcing it. People cry harassment and bullying all the time now; they've realized it gets people banned and/or fired. It's abused. How do you combat that without a more fleshed-out policy?

[–]twominitsturkish 34 points (4 children)

Yes but how will it be enforced is my question. As of now, the only enforcement I can see comes from the mods (who I presume will continue to enforce under more guidance from the admins). Will enforcement become uniform across subs, or will mods still have leeway to make their subs more or less stringent with rules?

Also, and this is really the most important thing between Reddit staying Reddit or Reddit turning into Tumblr, exactly WHAT QUALIFIES AS HARASSMENT? What is your line for what people can say or not say? Obviously a reply stating "I'll kill you, you faggot," is harassment, but what about a reply stating "OP is a faggot", in a thread about the word 'faggot' or "OP you fat fuck." in a thread about obesity? Please give us a direct answer.

[–]Warlizard 1974 points (804 children)

In Ellen Pao's op-ed in the Washington Post today, she said "But to attract more mainstream audiences and bring in the big-budget advertisers, you must hide or remove the ugly."

How much of the push toward removing "ugly" elements of Reddit comes from the motivation to monetize Reddit?

EDIT: "Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)" -- This is troubling because although it seems reasonable on the surface, in practice, there are people who scream harassment when any criticism is levied against them. How will you determine what constitutes harassment?

EDIT 2: Proposed definition of harassment -- Harassment is defined as repetitive, unwanted, non-constructive contact from a person or persons whose effect is to annoy, disturb, threaten, humiliate, or torment a person, group or an organization.

[–]spez[S,A] -1196 points (527 children)

How much of the push toward removing "ugly" elements of Reddit comes from the motivation to monetize Reddit?

Zero.

edit: only on Reddit would someone pay to gild this comment so others can continue to downvote it more easily.

[–]gitykinz 248 points (120 children)

You have /u/Yishan and /u/ekjp directly contradicting this answer very recently in Reddit posts.

[–]absinthe-grey 85 points (7 children)

Ellen Pao: the trolls are winning. op ed in Washington post today.

This isn’t an easy problem to solve. To understand the challenges facing today’s Internet content platforms, layer onto that original balancing act a desire to grow audience and generate revenue. A large portion of the Internet audience enjoys edgy content and the behavior of the more extreme users; it wants to see the bad with the good, so it becomes harder to get rid of the ugly. But to attract more mainstream audiences and bring in the big-budget advertisers, you must hide or remove the ugly.

Hmm, who do I believe represents the board's monetizing strategy more? 'Bait and switch Pao' or 'damage control Steve'.

You keep talking about honesty, and providing more tools, blah blah, but why don't you come out with it and honestly say you are looking to generate money from the site?

Personally I would have more respect for an organisation that is clear about its motive of balancing profit with content. I could get behind that a lot more than your transparent 'honest' we are only here for the feels approach.

Reddit wants to have its 'bastion of free speech' cake and eat it. That doesn't really fool anybody.

[–]InventorOfTrees 473 points (29 children)

Come on, man. If you're going to do an AMA under the guise of being completely open and honest, this kind of blatant bullshit response is just insulting. Either A) you are telling the truth, in which case I'm curious what your board members think about you wasting all this time on a movement that has angered a large majority of your current userbase with no monetary gain or goal in mind, or B) you are lying.

edit: only on Reddit would the CEO edit his comments to complain about being downvoted for his shit reply.

[–]nemoid 743 points (62 children)

I find that hard to believe when you say:

Similar to NSFW, another type of content that is difficult to define, but you know it when you see it, is the content that violates a common sense of decency. This classification will require a login, must be opted into, will not appear in search results or public listings, and will generate no revenue for Reddit.

[–]omcagk 23 points (0 children)

Can you please stop lying to us? We all know about Yishan and the venture capitalist deal. You guys made a blog post about it. We all know that your board of directors has been pushing both you and Ellen to improve user growth. You said it yourself, and Ellen said it when she resigned. Stop lying and just admit that, yes, it's for the money. It's really embarrassing to see you guys continue pretending that reddit is still some little grassroots community.

[–]MrCaboose96[🍰] 1942 points (921 children)

Mr Huffman,

First off, thank you for doing this AMA. On Tuesday, you said:

Neither Alexis nor I created reddit to be a bastion of free speech, but rather as a place where open and honest discussion can happen[...]

In this Forbes article from 2012, Alexis responds to a question about what the founding fathers would have thought of Reddit by saying, "A bastion of free speech on the World Wide Web? I bet they would like it."

Can you please explain the disparity between these two comments?

Thank you.

EDIT: spez's answer is here.

[–]spez[S,A] -1917 points (763 children)

First, they don't conflict directly, but the common wording is unfortunate.

As I state in my post, the concept of free speech is important to us, but completely unfettered free speech can cause harm to others and additionally silence others, which is what we'll continue to address.

[–]SirYodah 1097 points (164 children)

Can you please speak on why real members are still being shadowbanned, even after you claimed that they never should be?

For reference: https://np.reddit.com/r/KotakuInAction/comments/3dd954/censorship_mod_of_rneofag_shadowbanned_for_asking/

Note: I'm not involved in any of the communities represented in the link, I found it on /r/all yesterday and want to know the reason why people are still being shadowbanned.

EDIT: Thanks to the spez and the other admins that replied. Folks, please stop downvoting them if you don't like their answer. I asked why people are still being shadowbanned, and the answer is because they don't have an alternative yet, but they're working on it. It may not be the answer some of you hoped for, but it's enough for me.

[–]The_Antigamer 826 points (182 children)

    you know it when you see it.

That is exactly the kind of ambiguity that will cause further controversy.

[–]spez[S,A] -739 points (151 children)

[–]thepenguin259 909 points (34 children)

Actually....the policy "I'll know it when I see it" was modified in Memoirs v. Massachusetts and Miller v. California (SLAPS test) BECAUSE it was ambiguous.....

The following comes from your wikipedia article....awkward....

This was modified in Memoirs v. Massachusetts (1966), in which obscenity was defined as anything patently offensive, appealing to prurient interest, and of no redeeming social value. Still, however, this left the ultimate decision of what constituted obscenity up to the whim of the courts, and did not provide an easily applicable standard for review by the lower courts. This changed in 1973 with Miller v. California. The Miller case established what came to be known as the Miller test, which clearly articulated that three criteria must be met for a work to be legitimately subject to state regulations. The Court recognized the inherent risk in legislating what constitutes obscenity, and necessarily limited the scope of the criteria. The criteria were: the average person, applying local community standards, looking at the work in its entirety, must find that it appeals to the prurient interest; the work must describe or depict, in an obviously offensive way, sexual conduct or excretory functions; and the work as a whole must lack "serious literary, artistic, political, or scientific values".

[–]QuinineGlow 311ポイント312ポイント  (7子コメント)

This had to be refined into the Miller Test and it's still largely unworkable as a concept.

Ironically, part of the reason why is because of the whole 'Internet Age' thing...

EDIT: and if you insist on going down this road then you've officially placed yourself in a position to dictate what things hold or lack merit- intrinsically, artistically and philosophically.

Good luck with that.

[–]cynic_alone 159ポイント160ポイント  (2子コメント)

Thanks for proving our point about how arbitrary this is. Oh, and your citation is completely wrong.

That is not in fact what the Supreme Court said. That is what one single justice said, in one case, one time. The justice wrote a concurring opinion in which he alone said that was his standard, and notably no other justices joined in that concurrence. Justice Stewart was later criticized for the very reason that such a standard is little more than subjective, dictatorial power dressed up in judicial robes.

[–]Absinthe99 5ポイント6ポイント  (0子コメント)

Here is the problem I have with that, and with the statement as constructed here:

Similar to NSFW, another type of content that is difficult to define, but YOU know it when you see it, is the content that violates a common sense of decency. This classification will require a login, must be opted into, will not appear in search results or public listings, and will generate no revenue for Reddit.

It is actually TOO specific, and yet also TOO ambiguous. Who is the "You" in that sentence?

Most people will place themselves in that position -- and thus it becomes a completely arbitrary, entirely subjective, non-standard "standard".

Most laws, codes, etc. substitute at least some sense of "reasonable" and some sense of "community" -- i.e. the content deemed indecent is that which a "substantial plurality, if not a majority, of the members of a community deem to violate a 'reasonable' sense of common decency."

Granted that is STILL vague and somewhat ambiguous, but at least it is not construed in the sense of any ONE INDIVIDUAL'S idea of a "common sense of decency" (which let's be real, doesn't actually exist, and you will never arrive at any consensus on).

Because otherwise, when using the word YOU -- well, what you, spez, on some given day (or in some given mood) happen to think is "indecent", and what *I* think is "indecent", is likely to be significantly different from what someone into BDSM thinks is "indecent", and different yet again from what someone in Podunk, Iowa thinks is "indecent", which is probably going to be significantly different from what a variety of users from Bangladesh or Indonesia, or the inner part of Outer Mongolia, think is "indecent".

Understood that this is a VERY tough thing to try to develop a "policy" on. I mean is the content of /r/watchpeopledie "indecent"? It's certainly "troubling" to the mind, some of it may actually be "gory" (while most of it is not)... yet I can easily see people thinking (and CLAIMING) that it is "indecent" and even "offensive" -- two labels I would NEVER personally attach, in fact I would tend towards other labels like "sobering" and "disillusioning", possibly "hard to watch, but important" even "useful" (because among other things it has made me more cautious as a driver & vehicle owner).

[–]AmesCG 110ポイント111ポイント  (4子コメント)

Lawyer here! This is not an example you want to emulate: the "know it when I see it" test was Justice Stewart's way of giving up on a more specific definition for obscenity, after years of the Court wrestling with it. You should try to do better.

[–]almightybob1 2358ポイント2359ポイント x5 (161子コメント)

Hello Steve.

You said the other day that "Neither Alexis nor I created reddit to be a bastion of free speech". As you probably are aware by now, reddit remembers differently. Here are just a few of my favourite quotes, articles and comments which demonstrate that reddit has in fact long trumpeted itself as just that - a bastion of free speech.

A reddit ad, uploaded March 2007:

Save freedom of speech - use reddit.com.

You, Steve Huffman, on why reddit hasn't degenerated into Digg, 2008:

I suspect that it's because we respect our users (at least the ones who return the favor), are honest, and don't censor content.

You, Steve Huffman, 2009:

We've been accused of censoring since day one, and we have a long track record of not doing so.

Then-General Manager Erik Martin, 2012:

We're a free speech site with very few exceptions (mostly personal info) and having to stomach occasional troll reddits like picsofdeadkids or morally questionable reddits like jailbait are part of the price of free speech on a site like this.

reddit blogpost, 2012 (this one is my favourite):

At reddit we care deeply about not imposing ours or anyone else's opinions on how people use the reddit platform. We are adamant about not limiting the ability to use the reddit platform even when we do not ourselves agree with or condone a specific use.

[...]

We understand that this might make some of you worried about the slippery slope from banning one specific type of content to banning other types of content. We're concerned about that too, and do not make this policy change lightly or without careful deliberation. We will tirelessly defend the right to freely share information on reddit in any way we can, even if it is offensive or discusses something that may be illegal.

Then-CEO Yishan Wong, October 2012:

We stand for free speech. This means we are not going to ban distasteful subreddits. We will not ban legal content even if we find it odious or if we personally condemn it.

reddit's core values, May 2015:

  • Allow freedom of expression.

  • Be stewards, not dictators. The community owns itself.

And of course (do I even need to add it?) Alexis Ohanian literally calling reddit a bastion of free speech, February 2012. Now with bonus Google+ post saying how proud he is of that quote!

There are many more examples, from yourself and other key figures at reddit (including Alexis), confirming that reddit has promoted itself as a centre of free speech, and that this belief was and is widespread amongst the corporate culture of reddit. If you want to read more, check out the new subreddit /r/BoFS (Bastion of Free Speech), which gathered all these examples and more in less than two days.

So now that you've had time to plan your response to these inevitable accusations of hypocrisy, my question is this: who do you think you are fooling Steve?

[–]SUSAN_IS_A_BITCH 608ポイント609ポイント  (37子コメント)

TLDR: How is the Reddit administration planning to improve their communication with users about your policies?

Over the last year there have been a number of moments where top employees have dropped the ball when it came to talking with users about Reddit's direction:

I'm sure other users have other examples, but these are the ones that have stuck with me. I intentionally left out the announcement of the /r/fatpeoplehate ban because I thought it was clear why those subreddits were being banned, though admittedly many users were confused about the new policy and it quickly became another mess.

I think this AMA is a good first step toward better communication with the user base, but only if your responses are as direct and clear as they once were.

I wish I didn't have to fear the Announcements' comments section like Jabba the Hutt's janitor fears the bathroom.

[–]TheCid 205ポイント206ポイント  (10子コメント)

Publication of someone’s private and confidential information

Can we get a clarification on what is classified under this? Gawker wrote an article outing a reddit user's real name a few years ago and they suffered no punishment from the reddit admins. Some subreddits banned all Gawker content, but this policy should have been handed down from the top.

  • Is a reddit user's real name considered private and confidential information? What if they've already validated that user name against their real identity somehow (admins, people who've done verified AMAs, etc)?

  • Is a pseudonymous user of another site's real name considered private and confidential information? What if they're an e-celebrity and this real name is already widely known (TotalBiscuit, PewDiePie, anyone with a verified Twitter account, etc.)?

  • Is the reverse of this considered private and confidential information? (Going from real name to username on reddit or another site, such as Twitter.) What if this has been validated?

  • Is "giving credit" to a cosplayer or an artist, given just a image hosted off-site (such as on imgur), considered releasing personal information?

That should be sufficient to clear up most of the gray area on personal information.


I have concerns about the "harassment/bullying" rule, as certain political groups on the internet like to claim that any and all disagreement with them is harassment, but this is a rule whose quality can only be measured in its enforcement, rather than its phrasing.


Similar to NSFW, another type of content that is difficult to define, but you know it when you see it, is the content that violates a common sense of decency. This classification will require a login, must be opted into, will not appear in search results or public listings, and will generate no revenue for Reddit.

Will it be evident when this classification has been applied to a subreddit? Will subreddit creators be allowed to label themselves under this classification? Will other subreddits be able to apply this label to comments or posts rather than deleting them outright?

[–]koproller 1045ポイント1046ポイント  (68子コメント)

Hi. First of all, thanks for doing this AMA. In your previous AMA you said that "Ellen was not used as a scapegoat" (source).
Yet it seems that /u/kn0thing admitted that he was responsible for the mess in AMA (including Victoria being fired) (source).
And /u/yishan shed some light on the case here, and even Reddit's former chief engineer Bethanye Blount (source) thought that Ellen Pao was put on a glass cliff. And when she fell, because Reddit became blind with rage over a course she didn't pick and a firing she didn't decide, nobody with any authority came to her aid. It felt incredibly planned.
Do you still hold the opinion that she wasn't used as a scapegoat?

[–]amaperson1234 490ポイント491ポイント  (68子コメント)

It's been said that you are going to remove the more cancerous subreddits. I'm curious as to whether ShitRedditSays will be included among this category. On the face of it, a place where reprehensible comments are pointed out, right?

It must have been two years ago now when shit hit the fan and I found a link to a thread where one redditor, clearly in a distressed state, had made a post alluding to their future suicide. Now, of course, the vast majority of responses were what you would expect from most humans. Compassionate and sincere posts offering this person help and support. Who on earth would tell a person in this condition to kill themselves? Or worse, tell them the world would be better off without them? Enter ShitRedditSays.

The comments made towards this person by a significant portion of people are among the most disturbing things I have ever seen on this site. It was the sort of thing I would expect to see on SRS, as a showcase of how awful Reddit is. So, I went to the sub to see if they were talking about it. They were, but not in the way I had expected. They were bragging. They were laughing. They were celebrating. The suicidal person in question was affiliated with the MRA sub, something that SRS greatly opposes. So much so, they brigaded the thread the person had posted in, and told them to kill themselves. Repeatedly told them. And when the person did, they were happy. Because, to them, this was a war. And anything was acceptable. Telling a suicidal person to kill themselves was perfectly fine. That is how lacking in perspective many of these people are.

Much of what was said was deleted shortly afterwards so it would not be visible anymore. Well, almost all of it. The below is only a tiny fraction of what was said. There was a lot worse.

http://i.imgur.com/ehQNU.png

http://i.imgur.com/4qMV8.png

http://i.imgur.com/nSCSV.png

I had always thought SRS was merely a sub dedicated to showcasing the darker side of this site. A way of promoting change, but nothing malicious. I messaged one of the mods about what had happened expecting them to condemn the behavior, but instead they bragged about it like some sort of psychopath. It was one of the most fucked up conversations I have ever had. Further examination of the sub and their mods clearly showed that this is a group of people who are in fact quite hateful. Many of the mods displayed blatant prejudices against various groups.

And the media doesn't show this side of SRS, for whatever reason. Possibly out of laziness, or perhaps because SRS deletes the vast majority of their more shameful history. We hear about how they got rid of the disgusting Jailbait sub, something that I (and I'm sure many others) was very happy about. But we never hear about the racism, sexism or harassment that they so frequently partake in. So, on the face of it, SRS is this progressive humanitarian group that Reddit can showcase as an example of how the site is not just a cesspit of evil. Am I right?

And that's how it appears to many users of the sub too. Young teenagers in many cases. Progressive, well meaning individuals who want to highlight the unsavory things that are said throughout this site. Except we know now, that those controlling SRS and many of their more active members have much more sinister intentions than that. Clearly, they have a dangerous influence over young and impressionable people, who are unaware of these true intentions.

There is also a dark side, communities whose purpose is reprehensible, and we don’t have any obligation to support them. And we also believe that some communities currently on the platform should not be here at all.

My questions - Is the above statement genuine? Will ShitRedditSays be removed like the rest of the cancerous subreddits?

Yes or No? The answer to both questions is the same.

[–]MarshalConsensus 62ポイント63ポイント  (6子コメント)

Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

How, precisely, do you intend to make this determination? Different people have different tolerances for asshattery, and some wield their "victimhood" as a weapon very insincerely. I would never go to fatpeoplehate or srs or the like and imagine I would feel welcomed, but neither would I feel "intimidated into silence" because of their hate. Their echo chambers may be filled with despicable people, but I don't feel threatened by their existence.

Yet other people feel differently, to the point they feel they must silence others. And maybe they legitimately do feel threatened. But personally I feel like being offended by what anonymous people say online is beyond ridiculous. A comment carries as much weight as the effort taken to make it, and around here that effort is as close to zero as possible.

So who gets to make the determination of harassment or threatening behaviour? You? All the admins by vote? Is one person feeling like they are offended enough? 10? 100? What if equally many people think the people claiming intimidation are wrong? Having a content policy is all well and good, but unless you can describe EXACTLY how it will be applied, it's just empty sentiment.

[–]Iwasapirateonce 63ポイント64ポイント  (4子コメント)

One of the things you just do not seem to fully grasp is that it is reddit's complete incompetence at interacting with the community that has caused the majority of the damage and frustration so far.

The community has huge issues with how the site's admin mechanics completely lack any sort of transparency, and with how shadowbans are widely abused across the site even though you claim they should only be used for dealing with "spammers".

Part of the reason Pao's reign at reddit was so tumultuous was because reddit's communication and announcements degraded into rambling, non-specific blog posts. It was a damn disgraceful way of running a community-oriented company. You owe it to your users to fix these issues, to communicate with clarity, and to fix the technical deficit of the site.

I, and a lot of users on this site, want to keep the original policy of "if it's not illegal, and it's not brigading or dissemination of personal information, it's okay, even if we do not agree with it", but I have to say it will not be the cessation of this policy that will destroy reddit; it's the issues I list below:

Current Major issues

  • Time and time again, reddit's administration has shown a complete lack of ability to come up with concrete rules for what is harassment or brigading. You can't implement new policies fairly unless you have proper rules and regulations in place.

  • Inconsistency in the application of your policies – why was FPH banned but SRS not? I am willing to bet that as a percentage of the sub population SRS engaged in more brigading activity. The way the bans were selectively handed down just reeks of partiality.

  • Shadowbanning, lack of transparency, lack of proper moderator audit logs: if content is being removed from the site, there needs to be a proper log of what is happening and why. Why can't we have an automated sub that details all the moderation actions taken by the site's admins (names could be redacted if necessary; see the sketch after this list)? Say I repeatedly call Donald Trump a **** and set out to publicly humiliate him online as much as possible -- is that harassment? What about if I call Anita Sarkeesian a ***** and do the same? What if I do the same to someone popular on reddit -- is that harassment? Harassment seems to have a lot of legal definitions depending on the part of the world you are in. You need to pick one and explicitly define it, and it needs to be reputation-neutral (i.e. apply to the popular and unpopular in equal measure), and it should also have a public-interest clause.

  • Lack of general respect for the community, especially the mods (the whole Victoria scandal illustrates this perfectly); again, this links back to the communication problem.

  • Lack of clear and succinct communication. Lack of meaningful discourse between the site's owners and the community. No effective medium of communication (blogs suck for this btw)
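To make the audit-log idea above concrete, here is a minimal sketch of what such an automated, public moderation log could look like. This is only an illustration, assuming a Python bot built on the PRAW library, a bot account that has somehow been granted read access to the target subreddit's moderation log (normally moderator-only), and placeholder subreddit names and credentials throughout:

    # Sketch of a transparency bot that mirrors moderation actions to a public log sub.
    # Assumes PRAW, placeholder credentials, and an account allowed to read the mod log.
    import praw

    reddit = praw.Reddit(
        client_id="CLIENT_ID",            # placeholder
        client_secret="CLIENT_SECRET",    # placeholder
        username="modlog_bot",            # placeholder bot account
        password="PASSWORD",              # placeholder
        user_agent="public-modlog-bot/0.1 by modlog_bot",
    )

    WATCHED = "worldnews"       # example: subreddit whose mod log is mirrored
    LOG_SUB = "modlogmirror"    # example: public subreddit where the mirrored log is posted

    # Read the most recent moderation actions and repost a short summary of each.
    for action in reddit.subreddit(WATCHED).mod.log(limit=25):
        summary = "[{}] target: {} details: {}".format(
            action.action,                        # e.g. "removelink", "banuser"
            action.target_permalink or "n/a",     # moderator names could be redacted here
            action.details or "none",
        )
        reddit.subreddit(LOG_SUB).submit(title=summary[:290], selftext=summary)

In practice a bot like this would also need to skip actions it has already posted and respect API rate limits, but the point stands: the mechanics are simple, and the obstacle to this kind of transparency is policy, not technology.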

TLDR: Fix the site's tools and administration structure before you start thinking about making philosophical changes to how the site is run and what version of freedom of speech you use. Doing otherwise is just another insult to users. Overall this site needs a proper intravenous dose of priority management. The management style is the main problem with reddit, not its sometimes rumbustious/distasteful community.

Set out a proper code of ethics for reddit, and stick to it please. And for once, try to make it unambiguous.

[–]SaidTheCanadian 157ポイント158ポイント  (16子コメント)

i.e. things that are actually illegal, such as copyrighted material

This is a poorly-worded idea. "Copyrighted material" is not illegal, nor should linking to "copyrighted material" be considered illegal. E.g. if I were to link to a New York Times article discussing these proposed changes, I am linking to copyrighted material. Often it's impossible to know the copyright status of something, hence enforcement should be limited to a takedown-based approach (i.e. if someone receives a legitimate notice, then the offending content should be suspended or removed... but should the subreddit or user be banned??); however, it should be up to whichever site is hosting the material. Perhaps the most clear-cut example of doing something illegal to violate another person's copyright would be posting the full text of a copyrighted book as a series of comments -- that would be inappropriate.

[–]caitlinreid 100ポイント101ポイント  (11子コメント)

Anything illegal (i.e. things that are actually illegal, such as copyrighted material).

This is a huge mistake.

90% of content uploaded to imgur to be "rehosted" is infringing on copyrights. Isn't someone at reddit an investor in imgur btw?

Copyright infringement is handled via DMCA. If someone has a complaint the DMCA laws outline specific steps to take to remedy that and the person accused has a chance to respond in a clearly defined way.

In addition, removing copyright infringement at all is you, reddit, saying that you are going to moderate such content. Once you take this stance guess what? You are now actually liable for all infringing material on the entire site. That means you can (and will) get sued for real money. It will destroy reddit.

The DMCA is intended to protect service providers (reddit) because they do not police for copyrighted content. By moderating such content without legal notice (DMCA) you lose those protections.

Have fun with that I guess.

Since this is an AMA, I guess my question is: how can a company running a site like reddit be so damn clueless about things that were hashed out ages ago?

[–]PROFESSIONAL_FART 342ポイント343ポイント  (21子コメント)

Copy/pasted because this question was not answered during your last AMA:

Do you have any plans to remove all the subreddit squatters?

I find it very unsettling that I've put 2 years of volunteer work into building my community and yet it can all be undone on a whim because there are squatters who outrank me in the mod list. These people are still active on reddit, just not in my specific community. The problem exists all across reddit.

At the very least, making it easier to get admins to remove these people would do a world of good.

[–]mcctaggart 204ポイント205ポイント  (38子コメント)

Spez, there have been accusations for years that a cabal of mods have sought to control a number of subreddits to suit their own political agenda. They censor posts and comments. This censorship has been documented on subreddits like r/politicalmoderation, r/subredditcancer, r/moderationlog and r/undelete. You can search these subs for individual subreddit names to see the content they have removed.

r/worldnews, r/politics, r/europe, r/unitedkingdom, r/ukpolitics have all been guilty.

To give a couple of examples, r/europe bans people just for saying ISIS are inspired by the Qur'an.

When the Tunisian terror attacks happened, they removed the thread about it, saying it wasn't relevant as it happened in Africa, despite the shooter targeting Europeans on holiday. This was one of those rare occasions when it was such a big story that there was uproar on the sub, so they had to relent. Many deleted stories go unnoticed by the community, though.

Another excuse they will use to remove content they don't want people to see is to claim something is "low quality". Recently, for example, when someone posted amateur footage of African immigrants shouting that they had a right to live in Germany, they removed it and said the footage wasn't professional.

They also removed a thread about African migrants attacking tourists in Mallorca for the same reason.

Here is a thread about the time they removed all threads about Muslim migrants throwing Christians out a boat in the Med because "racists are using the story to post racism". This was another time they had to relent after so much uproar.

This "low quality" excuse has been used on r/unitedkingdom too. One time a user posted a picture he took of a poster in a public school. It read that music was haram and the work of the devil and warned students not to dance. It was a top post and then the mods removed it. They eventualy had to come up with this reason that the picture was not taken by a professional. They then added this rule to the sidebar. r/unitedkingdom has become famous for purging UKIP supporters (a political party which wants to leave the EU). This is often talked about on r/ukipparty. People are banned for no reason other than this. One banned user was recently told in a modmail that "he sounded a bit ukipppy".

This happened during the last election to Ron Paul supporters on r/politics. The mods would use tactics like removing posts and then re-approving them an hour later, once they were much further down the queue and someone had protested, or making up some excuse for why a post was deleted.

There was a lot of uproar when r/worldnews kept deleting any Snowden stories and would not consider Glenn Greenwald's The Intercept a news source. Pretty sure they did this for RT News too, IIRC.

That's why there has been so much anger from some of us here and support for transparent moderation. People like u/go1dfish have been banned for trying to bring transparency to reddit. He created a bot to re-post deleted posts which some mods hated and even banned people for posting on his subs.

Reddit used to be a great forum over five years ago, when content was not curated and censored by a band of particular mods who have dug their claws into this site. Are you planning anything to make it great again and bring transparency to the moderation? As you know, many of the subs which are censored now grew large when they were freer. Some became default subs, and it is extremely difficult to get uncensored alternatives off the ground and make people aware of them. Maybe alternative subs could be advertised on large or default subs so people know they have options?

[–]zaikanekochan 350ポイント351ポイント  (95子コメント)

What will the process be for determining what is “offensive” and what is not?

Will these rules be clearly laid out for users to understand?

If something is deemed “offensive,” but is consensual (such as BDSM), will it be subject to removal?

Have any specific subs already been subject to discussion of removal, and if so, have Admins decided on which subs will be eliminated?

How do you envision “open and honest discussion” happening on controversial issues if content being deemed “offensive” is removed? If “offensive” subs are removed, do you foresee an influx of now rule-breaking users flooding otherwise rule-abiding subs?

What is your favorite Metallica album, and why is it “Master of Puppets?”

There has also been mention of allowing [deleted] messages to be seen, how would these be handled in terms of containing “offensive” content?

Will anything be done regarding inactive “squatter” mods, specifically allowing their removal on large subs?

EDIT: To everyone asking why I put "offensive" in quotation marks - from the previous announcement:

There has been a lot of discussion lately —on reddit, in the news, and here internally— about reddit’s policy on the more offensive and obscene content on our platform. Our top priority at reddit is to develop a comprehensive Content Policy and the tools to enforce it.

[–]zk223 29ポイント30ポイント  (3子コメント)

For fun, I tried my hand at writing up what I think is a fair content policy. Please steal it.

Content Policy

I. Definitions

As used in this Policy:

  1. "Community" means a sub-reddit, acting by and through its registered moderators.
  2. "Submission" means a reddit self post, link post, comment, private message, or other user submitted content, and includes such additional external content that a reasonable person would consider to be incorporated by link or reference.
  3. "Submitter" means the author of a Submission.

II. Policy

  1. No Submission may contain content where the act of submitting or publishing such content would cause a violation of applicable law, or where the content clearly encourages the violation of an applicable law protecting a person from harm, fear, or harassment.
  2. No Submission may identify an individual, whether by context or explicit reference, and contain content of such a nature as to place that individual in reasonable fear that the Submitter will cause the individual to be subjected to a criminal act. "Reasonable fear," as used in the preceding sentence, is an objective standard assessed from the perspective of a similarly situated reasonable person.
  3. No Submission may contain identifying or contact information relating to a person other than the Submitter, excepting information relating to a public figure generally made available by that public figure for the purpose of receiving communication from the public. "Identifying or contact information," as used in the preceding sentence, includes any information which, by itself or in connection with other reasonably available information, would be sufficient to allow an average member of the community receiving the information to uniquely identify a person or to contact a person outside of the reddit platform.
  4. No Submission may encourage communication with any individual, other than the Submitter, for the purpose of subjecting that individual to annoyance or disruption, excepting communication to public figures on matters of public concern.
  5. No Submission may encourage a Community or its members to interfere with the operation of any other Community. Interference consists of voting, commenting, or making submissions in another Community, or in sending private messages to members of that Community, for the purpose of exerting influence or control over that Community or its members.
  6. reddit has identified certain types of content as posing an undue cost for administrators and moderators to evaluate for compliance with applicable law, despite not necessarily being in violation of the law in all instances. Therefore, no Submission may contain sexually explicit or sexually suggestive images of a person under the age of eighteen, nor may a Submission contain sexually explicit images where the persons depicted in such images are identifiable and have not consented to disclosure of the images to the public.
  7. No Community may encourage or make submissions in violation of this Content Policy, and must take prompt action to remove any Submission that violates this Content Policy. All moderators of a Community are separately capable of action creating liability for the Community.

[–]mach-2 1200ポイント1201ポイント x38 (888子コメント)

/u/spez, /u/kn0thing

Are you going to push the button?


Reddit is on its way to being one of, if not the, most trafficked forums in the world. It is considered the front page of the internet both literally and metaphorically. I love reddit. I have met awesome people on here. I cannot deny that fact. I have learned so much from here. I have wasted more time here than I should have, yet strangely, I would not be the man I am today without Reddit. You've stated time and time again that your intent was not for a completely free speech website. Alexis has stated otherwise in the past. In your absence, the previous C.E.O. (/u/yishan) upheld the "free speech" mantra.

Unfortunately, in order for freedom of speech to be in effect, there had to be interaction. That is the very essence of speech. To interact. To elucidate. To that end, it also involves the freedom of hate. There is no way to soften the reality of the situation. There's a plethora of infections on the various arms of this website. And it's spread so much so that there has to be an amputation. This is not a fix. This is the first step to recovery. There is a seriously broken and dangerous attitude being fostered under the banner of free speech. The common argument has always been about "quarantining" the hate groups to their subs. But that has failed woefully. A cross pollination of bigotry was the inevitable outcome. The inmates run the asylum. There is a festering undertow of white supremacist/anti-woman/homophobic culture ever present on this website.

The Venn diagram of those clamoring for completely unmitigated "free speech" and those looking for an audience to proselytize about their hate groups is a circle. One oscillating circle that has swarmed the "front page" of your website. That is not to say every proponent of free speech is a racist/sexist bigot. That is to say that every racist/sexist bigot ON REDDIT is a proponent of unmoderated, thunderdome-style free speech. There is a common belief that Redditors make accounts in order to unsubscribe from the default subreddits. What does that say about the state of your website when the default communities are brimming with toxicity and hatred? What does that say about the "front page of the internet" when a toxic miasma of hatred is the very essence for which it is known?

Day in, day out, your website gets featured on media outlets for being the epicenter of some misogynistic, racist and utterly pigheaded scandal. From Anderson Cooper and the jailbait fiasco, to the fappening, to Ellen Pao's (/u/ekjp) most recent online lynching. This website is in a lot of trouble, packed tight in a hate-fueled propellant heading at light speed towards a brick wall of an irreparable shit-tier reputation. If left unchecked, your website will become a radioactive wasteland to the very celebs and advertisers you are trying to attract. But it's not too late. Only you can stop it. This is your watershed moment.

Diplomacy has failed. There is no compromise. That ship has sailed and found natives. From fatpeoplehate to coontown to the ever-present talisman of "chan culture" reactionary bollocks. These groups have shown time and time again that they are willing to lash out, disrupt and poison any community they set their sights on. The pictures comparing Ellen Pao to Chairman Mao, or the racist rhetoric against her ethnicity, did not come from outside. They came from, and were propelled by, the very loud crowd of bigots hiding behind the free speech proponents on this private website.

The basement of hate subs is no longer a containment. It's a lounge with a beacon. There is no "exchange of ideas/honest discussion" going on. There is only a podium for whatever crank pundit can present the warm milk to the default redditor about the encroachment of the omniscient millennial "social justice warriors/bleeding heart liberals". That's why subs like /r/shitredditsays draw more ire than literal white supremacist hubs like /r/coontown and /r/beatingniggers.

That's why this website was basically unusable when fatpeoplehate got banned. And that scab peels and bleeds over the front page anytime a person with any combination of... (Arab, Roma, Asian, Brown, Black, Female, Feminist, Gay, Indian, Muslim, Native or Progressive, in some form or the other) is involved. You say there is a very loud minority doing all this. Then it seems like it's time to take out the fucking trash. You want free flow of ideas? There are a couple of ways to go about this... Firstly:


MODERATION, MODERATORS, THE FAULTS & THE DEFAULTS: The impending moderator tools are supposed to help moderators, I presume? What about squatting, inactive top moderators who let these default communities become the festering piles of toxicity that they are? Shouldn't the default moderators be held accountable? If you are going to tacitly advertise subreddits as the "default face of Reddit", you might want to make sure that face is acne-free and not hidden behind a Klan hood. If someone is going to moderate a place called /r/videos, is such a generalized community not supposed to be publicly inviting, and not some springboard for the latest Stormfront and anti-feminist bait video?

What happens if you create a check and balance to rejuvenate the idle mods whose sole purpose is to squat on places like /r/pics and /r/funny and /r/videos and claim to be "moderators" while doing nothing whatsoever? They demand tools from you. It's high time you demand right back. Places like /r/science are top quality precisely because they are moderated. Places like /r/pics and /r/videos become Klan rallies precisely because they are not. You have to deal with those responsible for leaving the flood gates open. Why wouldn't 150,000 people feel perfectly fine creating a sub called fatpeoplehate and basically flooding the "front page of the internet"?

The current defaults are overrun with these toxic, reactionary, internet-based hate groups. Places like /r/videos, /r/news, /r/pics, /r/funny and even /r/dataisbeautiful and /r/todayilearned are completely unrecognizable hubs of antebellum-style phrenological debates about the degeneracy of women, gays and minorities. The recent Ellen Pao lynch mob is a perfect example of that. She was called a cunt, and then Chairman Pao, and then things like "ching chong" got tossed around. It's high time you drag them kicking and screaming into the 21st century, or you decide to not have them as the defaults.

I'm a moderator of /r/offmychest. We banned outright bigotry and hatred against any group of protected classes. People revolted when they could no longer make threads about how much they hated blacks or muslims or women. The sub is still thriving and growing. We banned users of Fatpeoplehate and yet we are still around after a mere two days of their supposed revolt.


SHADOWBANNING, IP BANNING & CENSORSHIP A.K.A. Captain Ahab and the slippery slope: Regardless of what you do today, people are going to accuse you of some form of censorship or the other. This is your house. This is your creation. They are squatters here. If they don't abide by the rules, it is your prerogative to grab them by the scruff and deport them. You have a hate-based network called the "chimpire", which is a coagulation of the various hate subs on this website.

This is the Chimpire: /r/Apefrica /r/apewrangling /r/BlackCrime /r/BlackFathers /r/BlackHusbands /r/chicongo /r/ChimpireMETA /r/ChimpireOfftopic /r/chimpmusic /r/Chimpout /r/Detoilet /r/didntdonuffins /r/funnyniggers /r/gibsmedat /r/GreatApes /r/JustBlackGirlThings /r/muhdick /r/N1GGERS /r/NegroFree /r/NiggerCartoons /r/NiggerDocumentaries /r/NiggerDrama /r/NiggerFacts /r/niggerhistorymonth /r/NiggerMythology /r/NiggersGIFs /r/NiggersNews /r/niggerspics /r/niggersstories /r/NiggersTIL /r/niggervideos /r/niglets /r/RacistNiggers /r/ShitNiggersSay /r/teenapers /r/TheRacistRedPill /r/TNB /r/TrayvonMartin /r/USBlackCulture /r/WatchNiggersDie /r/WorldStarHP /r/WTFniggers

Reddit has been called a fertile ground for recruitment by literal nazis. Coontown currently has activity rivalling Stormfront, which has been around since its founding in 1995 by a former Alabama Klan leader. The Southern Poverty Law Center calls reddit "a worse black hole of violent racism than Stormfront," documenting at least 46 active subreddits devoted to white supremacy like /r/CoonTown.


Will banning hate subs solve the problem? No. But it's a goddamn good place to start. These hateful hives have lost the privilege accorded to them by your complacence and an Atlas Shrugged musical version of free speech. They do not deserve to have a platform of hate in the form of Reddit. The whole world is watching you at this moment. So where do we go from here? What question do you think you will be asked other than this? The man is here, and that man is you.

It used to be folk wisdom to cut the head off a snake and burn the wound to prevent it from growing back. The days of the wild west have come and gone. It was funny. The frenzy. The fiends. The fire and brimstone. You're the new sheriff. As the media would have it, the default reddit face is someone in a klan hood who hates women and supports pedophilia in some form or the other. It is an unfortunate stereotype that seems to be passed around as some sort of penance for "free speech".

It is unfair to the straight white males who have no hand in promoting such an outlook. It is unfair to the women and minorities looking for a place to have enriching discussions. It is unfair to you and your team of admins to be denigrated relentlessly. So I put it to you once more...

Steve, Alexis, are you going to push the button?

[–]MrMadcap 29ポイント30ポイント  (0子コメント)

Anything illegal (i.e. things that are actually illegal, such as copyrighted material. Discussing illegal activities, such as drug use, is not illegal)

In which jurisdiction, exactly? Need we now worry about Blasphemy laws?

Publication of someone’s private and confidential information

Does this apply to public figures, I wonder?

Anything that harasses, bullies, or abuses an individual or group of people

So no more anti-Nazi speech, then? And (more importantly) no honest, often much-needed negative criticisms of others on Reddit or off?

Similar to NSFW, another type of content that is difficult to define, but you know it when you see it, is the content that violates a common sense of decency. This classification will require a login, must be opted into, will not appear in search results or public listings, and will generate no revenue for Reddit.

This is super vague, and therefore in need of clarification. Some people might consider criticism of commonly held beliefs, or of cultural traditions, to be against the "common sense of decency" (each of which is needed to allow us to grow, evolve, and improve our civilization). For others, this may only cover repulsive imagery, such as vomit and feces (this is the only example in which I would approve of your suggested behavior, personally). For others still, imagery of abuse, and even further, of graphic death (which is often needed to guide others toward a sense of much-needed sympathy).

When they did, they accepted my reasoning

Or, perhaps they were simply afraid of you doing the same to them, as is the folly of the king.

[–]DickWhiskey 23ポイント24ポイント  (1子コメント)

This is not sufficient. I'll list a couple of problems that are immediately clear:

  1. You have not defined harassment, bullying, or abusing. As you probably know, the definitions for these words are wide ranging and rather contentious. Without a clear definition, any harassment rule is just a vague residual clause that can collect whatever conduct the person in charge doesn't like;
  2. Your "anything illegal" rule is likely broader than you think. Discussing drugs is not illegal, but encouraging drug use may indeed be illegal if anyone actually goes out and uses drugs after that encouragement. Additionally, the line between illegal and not illegal is very hazy when we're dealing with text - posting copyrighted material is illegal, but what about posting photos of marijuana? It's illegal to possess marijuana federally, so allowing /r/trees to continue posting pictures of marijuana plants is posting illegal activities.
  3. Also, expanding on #2 - all images are copyrighted under common law immediately. So your anti-illegal policy would actually apply to every single picture posted on reddit, unless OP actually took that photo.
  4. "Anything that incites harm or violence" is incredibly overbroad and probably applies to even more material than the "anything illegal" rule. Even common colloquial expressions can be read to "incite harm" (e.g., "John should be taken out back and shot"). Moreover, even non-violent comments can incite harm or violence (e.g., "Someone should do something about Jane"). Similar to the "harassment" rule, these problems leave the "incite harm" rule subject to vague interpretations and the whims of whoever has the ban-hammer.

But I like that you are attempting to use an actual framework. I just don't know why you are making it so difficult on yourselves by ignoring centuries of legal jurisprudence that have gone a long way to simplify these problems.

For example, the "incite harm" rule has an analogue in First Amendment jurisprudence, namely, the Brandenburg test. In Brandenburg the Supreme Court found that Ohio's statute outlawing advocating violence was unconstitutional, and they created the "clear and present danger" test. That test requires that it the advocacy present an "imminent threat of lawlessness" before it becomes subject to regulation. I don't see why a similar principle could not be used here to limit the breadth of the "incites harm" rule you've proposed.

Additionally, many cases and jurisdictions have gone to great lengths to define harassment in a way that carefully circumscribes the effect that prohibitions have on free speech. Instead of taking from those, though, it seems like you've ignored the problem with vague definitions.

EDIT:

One more - you haven't created any test to determine when it's appropriate to ban the person commenting versus when it's appropriate to ban a whole sub. At what point does brigading, harassment, bullying, etc. become a sub-wide problem?

[–]throwawaytiffany 383ポイント384ポイント  (33子コメント)

Are all DMCA takedowns posted to /r/ChillingEffects? If yes, why is this one missing? If no, why the change from the policy announced very recently? http://www.reddit.com/r/Roadcam/comments/38g72g/c/cruy2qt

[–]nixonrichard 161ポイント162ポイント  (16子コメント)

Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

So what about what we did to Comcast?

What about what we did to George Bush?

Rick Santorum?

What is the point of banning intimidating others into silence when there are entire subreddits that explicitly ban people simply for disagreement? What value would that serve unless you're going to say you can't ban individuals from subreddits for ideological differences?

[–]avoidingtheshadow 110ポイント111ポイント  (2子コメント)

Why was /u/Dancingqueen89 shadowbanned mere DAYS after your claim that shadowbans were only for spammers and not "real users"?

I'm going to presume that /r/neofag was banned for using publicly available pictures of NeoGAF users in its banner, since there was a complete lack of transparency regarding this ban. Why then, was /r/starcraftcirclejerk let off with a slap on the wrist for including the leaked nudes of a user, and subsequently spamming his inbox with username mentions in order to post said pictures? Is this not considered harassment? Why did one warrant a complete ban, and the other simply having the offending material removed?

Also, Why was /r/neogafinaction banned despite being created months before the banning of /r/neofag?

I'm hoping you'll live up to your promise of transparency /u/spez

(Disclaimer: I think Destiny is an asshole. I didn't browse NeoFAG. I care about fairness, equal application of the rules, and transparency).

[–]PhantomandaRose 28ポイント29ポイント  (0子コメント)

Hi u/spez. Thank you for taking the time to answer questions. As a user who was drawn to reddit by AMAs, this feature of the site is one of my biggest concerns. u/kn0thing went on record before you were appointed CEO that admins have no intention of monetizing r/iama. Now that you're CEO, I would appreciate if that pledge were renewed by you.

Can you please clearly answer the following questions regarding r/iama policies/content with direct answers? I anticipate a response like "we're not monetizing, but I can't give details about board discussion" or something to that effect. I understand that is normally how things are done, but reddit leadership right now is at odds with a large chunk of its userbase, and I think more transparency is warranted here.

  1. Is reddit, inc. currently under pressure from the board of directors to monetize on r/iama? If so, how demanding is the board regarding this?

  2. Has the reddit admin team ever considered capitalizing financially on r/iama? I'm talking official plans that were scrapped all the way down to batting around informal ideas that never came to fruition. If so, how recent have discussions regarding this been? If you can't answer this because of your departure from reddit, please encourage u/kn0thing or other people who would have information to weigh in.

  3. Can you, as newly appointed CEO, pledge that reddit, inc. will not implement a monetization scheme with r/iama? I asked here and here, but got no response.

  4. In a semi-related question, u/kn0thing has explained his goal of getting celebrities to participate regularly in reddit rather than just isolated AMAs. Is the push to ban offensive content part of reddit's plan to lure celebrities to reddit? I.e., make reddit noncontroversial so celebrities can avoid potential scandal? 4a. Why don't you think it would be better to poll the userbase to see if they want to make this sacrifice for a celebrity presence? 4b. Wouldn't this give celebrities a power of ultimatum over reddit, inc. E.g., Tom Cruise wants all jokes about his sexuality deleted or else he leaves forever.

Thank you for your time.

[–]Theta_Zero 82ポイント83ポイント  (12子コメント)

Anything illegal (i.e. things that are actually illegal, such as copyrighted material. Discussing illegal activities, such as drug use, is not illegal)

Many rule-abiding subreddits, like /r/Gaming, /r/Videos, /r/Movies, and /r/Music, thrive on sharing copyrighted multimedia content, such as movie trailers or gameplay footage. Each of these subreddits is 7 million members strong, and they are some of Reddit's most popular communities. While this is not malicious use of copyrighted material for profit, it is a very blurry line; one that services such as YouTube constantly delete content over, even on non-monetized videos.

How do you plan to tread this line without diminishing what makes these subs so popular?

[–]PleaseBuffThorn 101ポイント102ポイント  (1子コメント)

/r/neofag did nothing against the rules you had in place before today, nor against your new policy. We did not use personal or private information; we used information that was publicly available on the forum NeoGAF to make fun of and satirize the community. We have never DDoSed or done anything illegal. When we tried to make a new subreddit without the word "fag" in it, /r/NeogafInAction, you immediately banned it as well.

I'm not going to conjecture here, but something seems odd about how a niche, small subreddit got banned. What is your relationship with Malka, the founder of NeoGAF? Something seems odd here.

[–]RamonaLittle 64ポイント65ポイント  (12子コメント)

(2 of 6. I have multiple questions, which I'm posting individually so people can upvote/downvote individually.)

Will the new policy clarify whether/when/how users are allowed to encourage suicide?

As far as the existing policy, I asked for clarification and didn't get a reply. Then I asked again and didn't get a reply. Then I asked a third time and got a reply which I think doesn't make much sense, and the admins didn't reply to my follow-up message. Here is the conversation in full:

me to /r/reddit.com/:

I just saw this screencap. LordVinyl says that telling other users to kill themselves isn't harassment. Whether or not it's harassment, I've been assuming that advocating suicide is against reddit's user agreement, which says "Keep Everyone Safe: You agree to not intentionally jeopardize the health and safety of others or yourself." and "Do Not Incite Harm: You agree not to encourage harm against people."

Can you please advise: is it a violation of reddit rules to tell another redditor to kill themself?

Thank you for your time.

Ocrasorm: It depends on the context. If someone tells a user to kill themselves on a subreddit dealing with suicidal users we will take action.

If a user is in an argument on a random subreddit and tells them to kill themselves we would not ban someone for that. Sure it is a stupid thing to say but not necessarily jeoprdizing health and safety.

me: Thanks. Just to be clear -- you're saying that "kill yourself" isn't "inciting harm" unless it's "on a subreddit dealing with suicidal users," correct?

If that's the policy, I'll abide by it, but I don't think it makes much sense. There's no reason to assume that people with suicidal feelings are only posting on suicide-related subreddits.

If a user routinely tells everyone to kill themselves (and follows up with "I'm serious" and "do it"), all over reddit, that's OK, as long as he doesn't say it in subreddits that are explicitly suicide-related, correct? If one of their targets wound up killing himself, and their parents sued reddit, you personally would testify under oath that no rules were broken?

[I never got a reply to this.]

[–]Miserable_Wrongdoer 485ポイント486ポイント  (123子コメント)

If you're thinking of banning places like /r/coontown, /r/antipozi, /r/gasthekikes etc. and other racist, homophobic, and sexist subreddits I have the following questions for you:

Will /r/atheism be banned for encouraging its members to disrespect Islam by drawing the Prophet Muhammad and making offensive statements towards people of Faith?

Will /r/childfree be banned for being linked with the murder of a child and offensive statements towards children?

Will /r/anarchism be banned for calling for the violent overthrow of government and violence against the wealthy?

Will porn subreddits be banned for continuing the objectification of women?

Will subreddits like /r/killingwomen be banned?

These questions, /u/spez are entirely rhetorical.

The ultimate question is: If you're willing to ban some communities because their content is offensive to some people where do you draw the line?

Edit: Okay, based on your response it is subreddits that are "abusive" to "groups". What exactly constitutes said abuse to a group? Is /r/Atheism drawing the Prophet Muhammad to provoke Muslims abusive?

Further, you state that the "indecent" flag for subreddits such as /r/coontown would be based on a "I know it when I see it" basis. Do you plan on drawing a consistent and coherent policy for this eventually?

[–]316nuts 31ポイント32ポイント  (2子コメント)

How long ago do you wish reddit leadership would have dealt with this?

There have been numerous opportunities to make a positive impact on the soul and character of the reddit community. Yet at every step along the way, there have been executive decisions specifically allowing these communities to exist. Had you just stopped this nonsense years ago, reddit's growth might not have been fueled with quite as much hate and anger. This could have been done back in the days of /r/jailbait, when reddit was a fraction of the size and possibly a fraction of the problem.

I also take exception to a very specific point that /u/yishan made in this comment: "We tried to let you govern yourselves and you failed". While I agree with the spirit of what yishan is getting at (that the community brought this upon itself), the statement is actually a fundamental mischaracterization/misunderstanding of reddit as a whole. There is no "govern yourselves". Each mod can create and do whatever they want with their subreddit. As long as they don't break the very few rules for the website, mods have absolute authority to run and manage their community as they please. There is no higher governing authority. There is no counterbalance. It only takes one person to start all of this. The growth from there is also ungoverned.

You've long played into the "mods are gods" mantra, so I can't even fathom where the "We tried to let you govern yourselves and you failed" statement comes from. I have no authority over /r/funny. The userbase has no authority over /r/funny. If everyone suddenly rallies against /r/funny, nothing can be done by our voices alone. /u/illuminatedwax has direct and total control of that subreddit and can pull the plug, or kick out every mod and dedicate it to himself, at any time at all. No one can stop that. They are the top moderator and you have given them that authority. What balance exists to check this? None. Who is to blame? Reddit? The community? Why do you include me in the blame for something I have no control over? Why do you categorically blame all reddit users for being unable to "govern themselves" when everything is operating under constructs and systems that are fundamental to how reddit exists?

Now, due to years of questionable decisions, your company is losing valuable employees, is probably still not operating at a profit, and from the outside appears to be totally lost at sea.

With every crisis there is the gnashing of teeth, saying how wrong it was to have ignored x, y and z for many years. What else have you ignored for many years? What else is fundamentally broken? What else can't be fixed?

What is your plan? What is your five year plan? Who will be CEO in the next six months? Do you see reddit existing 10 years from now?

[–]biggmclargehuge 358ポイント359ポイント  (13子コメント)

-Things that are actually illegal, such as copyrighted material.

So 99% of the stuff on /r/pics, where people are posting copyrighted material without permission of the owners?

[–]cyberdark10 24ポイント25ポイント  (3子コメント)

I'm going to ignore your meaningless fluff and pandering, if you don't mind.

As your new content rules are rather vague, I will parrot the questions that have undoubtedly been raised by many more people. If we're to actually have a proper discussion on what is and isn't "right" for this website, then you, and the rest of Reddit's administration, need to clearly set down the rules without pleasantries and vagueness. Should you not do this, and instead purposefully leave gaps in your definitions to fit future bannings and future censored subs and posts, then all this change is useless and frankly insulting to anyone who cared about this in the first place, on any side.

Spam

First, I would like you to describe what constitutes spam. This may seem needless, as most know what spam means, but this ties in with what I said before. Should you leave a vague definition in place of a clearly defined one, the potential for abuse will be obvious to anyone. I suggest that whenever you wish to change this definition to include new types of spam, or types the original definition didn't cover, you make all users and moderators aware via announcements.

Illegal material

Do you mean the sharing of illegal material such as child pornography and torrents? If so, I can't say that I'm against this; however, as before, a clear definition of what this includes is necessary for the general userbase to be able to trust you.

Publication of someone’s private and confidential information

Without their consent, I assume. The publication of one's personal information with that person's consent shouldn't be punished, I'm sure you agree.

Anything that incites harm or violence against an individual or group of people

Another vague content policy. As with many others, I'm sure. I would like you to define "incite" in your own words, and "harm" in your own words. This is critical to keeping a transparent administration and instilling trust in the general userbase. Does "incite" mean "We should go do x"? Or is it more general, like "Someone should really do x", or "I wish someone would do x", or "I wouldn't mind if x happened."? What does "harm" mean? Physical harm? If so, what is this limited to? Is "We should pinch x on their cheeks." as bad as "We should torture and kill x."?

Is emotional harm included? If so, again, what is this limited to? Is unintentional emotional harm considered the same as advocating for constantly insulting a particular person? Furthermore, how do we know that the emotional harm claims will not be used to silence opposition? "You advocated for messaging me, that caused me emotional pain, therefore, you and everyone else should be banned.".

Does this policy include groups containing people who advocate for the group to cause harm to someone, physically or emotionally, when those advocates are not representative of the group? If so, how do we prevent people from outright faking membership in that group in order to demonize it and get it banned? For instance, imagine a group of people who like cotton candy more than cake. If someone who likes cake more than cotton candy becomes a low-level grunt in that group and then tells others that they should beat and kill people who like cake more than cotton candy, would this cause the group to get banned?

Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

What does harass or bully mean, in your own words? Is this limited to insults? Or is a more general approach taken and instead anything that can be deemed as intimidating can be banned? To give you an example, would /r/pcmasterrace be banned for taking an aggressive stance on what gaming platform to play on?

Furthermore, as you've stated yourself, your motivation and reasoning behind this is that such behavior stifles conversation by silencing opposition. I have a question: what exactly is the limit on this? As I'm sure you know, groups of people can be more or less timid, and thus what silences a group varies dramatically; how do you plan to account for this? By outright banning literally any form of aggression? That isn't enough to stop intimidation, as I'm sure you know. The mere presence of statistical facts that contradict one's viewpoint makes many people feel intimidated; will that be banned? The presence of a majority makes people afraid to speak their minds, so will there be quotas dictating that each dominant group gets equal floor time, meaning that past a certain point subreddits and people will be censored and banned until the other sides post an equal amount and attract the same number of supporters? Would this, itself, intimidate people into silence and cause people to have no stance at all?

Sexually suggestive content featuring minors

What exactly is a minor? What definition are you using? The age of consent? If so, then which one? 13? 16? 18? What defines "suggestive"? Could a minor in a bikini be considered suggestive? What about context? If the focus of the picture or video is on something other than the minor(s), will it be banned anyway? For instance, let's say a user creates a post to show an oddly shaped ice cream cone, but in the background there appears to be a 12 year old in a bikini rubbing sunblock lotion over themselves; would this be banned? How do you discover whether or not the person in the picture is a minor, however you define that? Would you require all sexually suggestive pictures or videos, however you define that, to come with proof that the person in said picture or video is of age? If so, wouldn't this then violate the policy that states that you cannot publicize a person's personal information?

Adult content must be flagged as NSFW

What defines "adult content"? For instance, would a sex ed subreddit be considered adult content and be required to tag every post with nsfw despite their primary demographic being children and teens? Does the "adult" in "adult content" mean that the content must be aimed at adults for it to be affected by this rule?

Content that violates a common sense of decency

What exactly does this mean? A common sense of decency is extremely vague, vague to the point of meaninglessness. Anything that genuinely warrants a ban should be covered by clearly defined rules; otherwise you and other staff members could simply abuse the vagueness to censor and control the narrative.

Conclusion

These new restrictions are so vague that they're borderline meaningless, so vague in fact that it wouldn't be outrageous to assume that you intended it to be like that, so vague that I could justify banning literally any content on this site, so vague that they even contradict each other in many interpretations.

I'm not going to lie: before this I was uninterested, mostly because the vast majority of "changes" and announcements about this have been nothing but fluff and pandering, and there's nothing I hate more than fluff and pandering under the guise of change. But now, with this post, I'm annoyed and aggravated, which means nothing to a multi-million-dollar company like Reddit, I'm sure.

I'm fine with you drawing a line in the sand, but don't make the line so wide that everyone is standing on it. Point towards it clearly and say "This is our line. This is where you cannot cross.".

[–]Woahtheredudex 146ポイント147ポイント  (3子コメント)

Why was /r/NeoFag banned when there has been no evidence that it or its users ever took part in harassment? Why was a mod of the sub then shadowbanned for asking about it? Especially when you have recently said that shadowbans are for spammers only?

[–]steakandwhiskey 19ポイント20ポイント  (1子コメント)

I think most people would agree that the six points listed in your general guidelines are well intentioned and reasonable. However, there are two points that are a bit vague:

  • "Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)"

Who exactly is going to be the arbiter of what crosses the line? There are tons of petty slapfights across all subreddits that could certainly be considered 'harassment'. Are admins going to have to be the 'nice police'? Does laughing at a fan over their sports team's misfortune constitute bullying? Banning anything under the broad stroke of 'harassment' is a slippery slope, as legitimate critique can easily be seen as falling into that category.

  • Anything illegal (i.e. things that are actually illegal, such as copyrighted material. ...

As I'm sure you're aware, a large number of the images posted across Reddit are copyrighted material from various photographers/artists. This is of course rather difficult to enforce, and researching whether something is fair to share isn't really a concern when most people just mirror an image on Imgur before posting. Will there be stricter enforcement of this policy? Or will it remain as it currently is (removed if the copyright holder complains)?

[–]SirT6 17ポイント18ポイント  (1子コメント)

I think this is the one that most people will be concerned about:

Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

Prohibiting harassment, bullying and abuse sounds great in principle. Can you offer a bit more about how you will define those terms, and how you will enforce such a prohibition of content? Some examples might go a long way toward clarifying your thoughts on this issue.

The Reddit staff is rather small compared to other social/community-based websites; I can't imagine it can effectively respond on a case-by-case reporting basis. Do you have a different vision for rapidly and efficiently enforcing a prohibition on this type of content?

[–]urdle 278ポイント279ポイント  (23子コメント)

Edit: Admins sold out. Go to voat; I recommend /v/downvoat to anyone who loves free speech.

Hello /u/spez, I thought about posting a long question about reddit's change of heart when it comes to free speech, but I have decided against it.

In your previous post, you claimed we as a community need to decide what our values are. I propose this: Honesty.

So my questions are these:

Is reddit still in the red?

If so, who is paying the bills?

And are these changes prompted by them?

Thank you.

[–]Kyoraki 74ポイント75ポイント  (16子コメント)

What actions are being taken about brigading, and will action be limited to communities whose political opinions reddit admins don't agree with?

Even now, this thread is being brigaded hard by members of SRS, AMR, GamerGhazi, and SRD, calling for the heads of subreddits they don't like, such as the downright innocuous KotakuInAction. Past comments by admins such as /u/kn0thing, saying that SRS isn't active enough to be worth enforcing against, are truly unacceptable and an outright double standard.

[–]UberAndrew 103ポイント104ポイント  (19子コメント)

I'm sure you're well aware of the Gamergate controversy.

One of the common tactics used by its opponents is labeling anyone who disagrees with them as harassers, and often racists or sexists.

Despite no actual harassment, doxing, or sexist or racist content, there are quite a few people who have labeled the Gamergate subreddit, /r/KotakuInAction, as a harassment subreddit simply because it is about Gamergate, and they are calling for its banning.

If you actually visit the subreddit you'll see it's exactly what it claims to be: a subreddit for ethics in journalism and media and for problems surrounding the gaming industry. But despite it not actually being a harassment subreddit, there is still worry it'll get banned simply because its opponents have labeled it as one.

My question is whether you'll actually investigate subreddits to determine if they're about harassment and bullying, or whether simply being regarded as a problematic subreddit by certain groups will be enough to get one banned.

[–]krispykrackers[A] 918ポイント919ポイント  (103子コメント)

Currently, if something from, say, /r/fullmoviesonyoutube gets a DMCA request, we review it. If we do not host the content, we do not remove it and instead refer the requester to the hosting site for removal. Obviously, we cannot remove content that is hosted on another site.

The tricky area is if instead of just a streaming movie, the link takes you to a download of that content that puts it onto your machine. That is closer to actually hosting, and our policy has been to remove that if requested.

Copyright laws weren't really written for the internet, so the distinctions aren't always clear.

[–]donkey_democrat 50ポイント51ポイント  (5子コメント)

One of the biggest problems with restricting speech is that the rules against speech are often vague, and open the door to further restrictions. A law against hate speech could define hate speech as whatever it wants, including anti-government speech.

Specifically, I would like you to go into more detail with these points:

• Anything that incites harm or violence against an individual or group of people

• Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)

What is inciting harm defined as? Is it as simple as being against a type of person, or do they have to threaten death?

Same goes for harassing and bullying people. Would fatpeoplehate be allowed, assuming it stayed within its own bounds, or would it be banned for harassing fat people?

How do subreddits protect against false flags or a few bad eggs? Was it right, in your mind, for fatpeoplehate to be banned entirely over the actions of a few users?

All of these questions need consideration. Thanks in advance.

[–]redpillschool 96ポイント97ポイント  (43子コメント)

In the past I have contacted the admins for guidelines to keep our mildly unpopular subreddit above board. The rude and short response I got was "just follow the rules", which is about as ambiguous as it gets, given that I was asking what the rules actually were. The site rules are open ended and unenforceable by mods. Mods don't have the ability to track brigading, so how could we ever be responsible for stopping it?

Let's skip the excuses and call it what it is: Are the rules a red herring? Will you be removing subs you don't like, regardless of rulebreaking?

Here are some scenarios that trouble me as a moderator:

  • Users can go literally anywhere on the site and troll. It's one big forum, there are no rules against participation anywhere.
  • If those users vote or comment their opinion and also subscribe to my subreddit, it can be seen as brigading.
  • Anybody can do this, especially if they want to frame the subreddit for misconduct.
  • There is no physical way for mods to prevent users from voting, and there doesn't seem to be a reason to prevent users from voting (since that is the entire purpose of reddit).
  • Despite the popular rhetoric that users "belong" to certain subreddits, most users subscribe to multiple subreddits, so telling them not to participate site-wide because they take part in discussion in certain subreddits is antithetical to the purpose of the site and, again, totally unenforceable.

Why would any of these actions cause an entire subreddit to be banned?


Edit: Additionally, will your administrators contact and work with the moderators when offenses occur? Or are you going to use supposed offenses as a reason to ditch subs you don't like, keeping the mods in the dark when you feel there's violating content?

[–]Liam_Banks 3ポイント4ポイント  (0子コメント)

"Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)"

Does this rule apply with equal force to all individuals and all groups? That is not a rhetorical question and I would appreciate a direct answer.

Many users and communities here have noticed a double standard at work in the implementation of reddit policies even when they seem neutral on their face. Some redditors who profess to combat bullying and harassment are among the worst offenders in bullying and harassing other redditors. They claim to do this as part of a struggle against racism and sexism, yet they choose their targets on the basis of race and sex.

Moreover, they are seemingly allowed to do things that other subreddits are forbidden to do, such as direct linking to posts, whereas others are given the impression that they will be banned unless they make use of np linking. There are other examples, but the point is that many on reddit have been given reason to believe that policies mean one thing when applied to certain groups that share popular political ideas, and something different when applied to everyone else.

Do you agree or disagree that this double standard has influenced reddit admins in the past?

Will there be any work done in the future to ensure that the same rules are applied in the same manner to everyone, regardless of their politics? If not, what are the limits of special treatment given to redditors who hold popular beliefs? Will the differences in treatment and the reasons for it be publicly acknowledged and explained?