35

A good Code of Conduct is a handshake agreement between users and the company. It is a document that inspires trust that situations around online conduct have been thoughtfully considered and will be handled properly, and users are free to safely engage. It is with that in mind that we are now releasing our 2023 update to the Code of Conduct. (Here's an image version, if you'd prefer that; it may not be entirely up-to-date as we make edits.)

This update has come after months of planning, research, and internal and external review. Our Trust & Safety, Legal, and Community teams (along with members of the Senior Leadership Team) have spent hundreds of hours crafting this document to alleviate pain points we have found with our current Code. In addition, moderators were given advance access to this document and the opportunity to propose changes. We carefully considered the suggested changes, spent a considerable amount of time discussing them with moderators, and incorporated a good number of suggestions, sometimes copy-pasting text directly from moderators.

Why update the Code?

  • There are certain things that the current Code of Conduct does not address. The world is ever-changing and it is our responsibility to ensure the safety of users of this network.

  • Upcoming regulatory pressures from Brazil, the EU, and elsewhere demand that our content moderation practices are able to stand up to scrutiny. We do not believe that our current code delivers on those requirements.

How will it look?

We are still working with design but we plan to segment the CoC with a landing page that will include the Mission Statement, Our expectations for users, and Unacceptable behavior. See this image for how the landing page will look.

The policies bulleted on the landing page will link to a more in-depth version. We also want to incorporate site-specific guidance into the "How to ask a good question" and "How to write a good answer" portions, so we hope to include links to individual site Help Centers to direct users to more detailed site-specific guidance.

A call for input

I said earlier that a good Code of Conduct is a handshake agreement between users and the company. It is a document that inspires trust that situations will be handled properly so that users are free to engage safely. For that handshake to mean something, we need input from you. We plan on going live with this update later in May, but until then, this is a very real chance for you to provide actionable feedback on the Code. In particular, we'd like to make sure we've captured the correct expectations in the "Our expectations for users" section, and we're very open to improving it further.

If you have an idea of how something could be better worded, or if you have found something that we have missed, please suggest an alternate draft for that section. We will be monitoring this post and will review all feedback.

The decision to update the Code was not made lightly, but we truly believe that these changes will better serve the community and the legacy that it has built. Those who use this network - in whatever capacity, as contributor or content consumer - deserve a clear, understandable, legally compliant set of expectations, and we believe this is a step toward best-in-class practices.

A chatroom

In order to allow for broader discussion, including back-and-forth conversations, we've created this chatroom so that comments don't get unwieldy. Please feel free to join and have conversations there; we'll also pop in and be around when possible.

Feedback cutoff date

We will be processing feedback given here by May 24th. This date gives us a couple of days to wrap up changes and make this update official by the end of May. Thank you everyone for participating, providing your thoughts, and helping us make this document better. We sincerely appreciate your efforts to engage and discuss with us.

CC BY-SA 4.0
16
  • 83
    is there a list anywhere of what the substantial changes are, relative to the previous CoC? at first glance it appears to more or less be the previous CoC but using more specific terminology rather than the more... open one we had before
    – Kevin B
    May 3 at 16:37
  • 15
    @KevinB We have not created a list, but the most notable changes include: 1) a more in-depth Expectations for users section, 2) a more thorough list of Abusive behaviors, and 3) the addition of Misleading information, Political content, Disruptive use of tooling, and Inauthentic usage policies.
    – Bella_Blue StaffMod
    May 3 at 17:45
  • 40
    I really liked the examples under Unacceptable Behavior: "No subtle put downs or unfriendly language... Cont'd": meta.stackexchange.com/conduct. This new version is a wall of text that not everyone would read. I am not against the additions per se, but taking away those bullet points in the current version that were really helpful is a questionable decision.
    – M--
    May 3 at 18:54
  • 21
    I would like to see the Code make explicit acknowledgement of the fact that sense of humor may legitimately vary among individuals. And I would like to see it explicitly call out as misconduct attempts to weaponize the Code itself. (For example, explaining in a comment that a question is poorly posed is not misconduct merely because the questioner is a new contributor.)
    – matt
    May 4 at 13:04
  • 1
    Is this CoC for SE, or for SE family? Supporting the first interpretation, after following links that are, ultimately, part of your CoC: avoid trying to answer questions that "are not about programming as defined in the help center." Supporting the second interpretation: posting on Featured in family sites; directly in CoC, "some sites may have stricter requirements or use different policies for questions/answers/comments." May 4 at 21:19
  • 2
    @user2206636: It is network-wide. Any Help Center links will actually end up being relative links to the Help Center on the current site (it's not SO-specific). For the specific quote in your comment, the Markdown on the network-wide /help/how-to-answer Help Center page simply includes a variable that refers to the current site's topic, and then links to the current site's /help/on-topic page – so on the MSE version, that line says "not about the software that powers the Stack Exchange network" instead, and points to MSE's on-topic Help Center page.
    – V2Blast StaffMod
    May 4 at 21:25
  • 1
    @V2Blast I would reword the CoC to make that clear. Currently, part of the CoC is "ask a good question; <link> read more to write a good question." Of course, "good question" is highly ambiguous without clarification. That means that the effective CoC is different for every site. A better way to handle this would be to have a blanket CoC, with site-specific CoCs, and the blanket site CoC specifies that you need to also follow site-specific CoCs. This is especially important when we get to further parts of the CoC: is solipsism "misleading" on Phil? Is erotica dehumanizing on art sites? May 4 at 21:35
  • 7
    Just so long as we keep it short and resist the impulse to get more and more and more specific, to the point of essentially crafting legislation, like some online platforms I could name.
    – Wildcard
    May 5 at 10:26
  • 6
    The usual problem of 'the comments are unwieldy' has cropped up. If y'all have substantive points to make, they belong in an answer. We'll be going through and pruning the comments over the weekend and beyond. If it's not specifically about the CoC, it shouldn't be here.
    – Journeyman Geek Mod
    May 5 at 22:49
  • 6
    Why does the CoC spend so many words and details on "wrong behaviour"? For example, reading about "abusive tooling" (I didn't know what that was) basically gives readers detailed instructions and even recipes (e.g. targeted voting) on how to abuse the system. That's the opposite of what a Code should do, in my opinion. Outline the desired behaviour in the Code, and as a moderator, if you see bad behaviour, implement your own moderation support tools to deal with it (e.g. detect and moderate suspected targeted voting).
    – Brandin
    May 16 at 9:05
  • After reading the proposed CoC through, it seems like a well-intentioned attempt to provide guidelines and safeguards against potential abuse on the site while promoting the free flow of high-quality information among contributors and participants. Good application by moderators should stay within the general approach that moderation is primarily "hands-off", allowing sharing of information for the betterment of all. May 16 at 11:06
  • 4
    Not enough for a "full" answer, but I don't understand the following sentence: "Additionally, we will not allow political misinformation or widely disproven allegations against a political figure not supported by reasonable evidence to be promoted on the platform." How can something be both "widely disproven" and "supported by reasonable evidence"? 2 days ago
  • @EJoshuaS-StandwithUkraine Newton's law is supported by reasonable evidence, but widely disproven. I don't know of any examples where this applies to an allegation, but it's not too hard to imagine one.
    – wizzwizz4
    19 hours ago
  • "Minimum Quality" requirements hidden in a Meta question? Maybe we can all agree on the 5 points in the accepted answer. It would help newbies to have traceable information like "Your post was deleted because it violates M-Q Rule no. 5". 16 hours ago
  • @SamGinrich why? We have general close reasons, site-specific close reasons, and custom close reasons. In my current view, that's sufficient. I just want individual close-as-off-topic reasons in the Data Explorer.
    – starball
    16 hours ago

26 Answers

104

Can we get "assume good intent" back in the Code of Conduct?

Or, better still, "presume good intent": this places the emphasis on initial assumptions, and can't be construed by bad-faith actors as a fully-general "never object to anything" directive.


The landing page is an excellent introduction to the Stack Exchange system, but will new users see it? Currently, the text on sign-up says:

By clicking “Sign up”, you agree to our terms of service, privacy policy and cookie policy

where the Terms of Service says:

you affirm that you have read, understand, and agree to be bound by these Public Network Terms, as well as the Acceptable Use Policy and Privacy Policy.

None of this links to the Code of Conduct; in fact, the only place that really seems to link to it is the Help Center. I was under the impression that the Code of Conduct was binding on users, but it looks like it's only binding on moderators.

Perhaps you could add a link to the Code of Conduct to the footer, or the tour, or something? (I can't really think of anywhere it'd fit.)


The wording in the Abusive behaviour, Sensitive content and imagery, and Political content policies looks like it's adapted from US federal law, but (to a European ear) the "actual or perceived race" bit reads like the race realism common to US politics. That, or it's confusing ethnicity and race.

Perhaps you could take inspiration from French law instead?

non-public defamation of a person or group of persons based on their origin or on their – actual or assumed – membership or non-membership of a specific ethnic group, nation, race or religion, shall be punishable by the fine laid down in respect of category 4 offences. Non-public defamation of a person or group of persons based on gender, sexual orientation or disability shall be subject to the same penalty

The list mentions gender twice, too; I assume that's a copy-paste error.


The Bullying and Harassment section includes:

Content that contributes to a hostile or threatening environment, denies a person's expressed gender identity, or invalidates a person's individual experiences in a manner that causes harm.

Broadening this to "denies a person's expressed identity" (omitting "gender") would cover a few extra circumstances that, historically, we haven't had policies prohibiting.

It'd be nice to have written backing for when I smite offenders with the hammer of fury – though not necessary, of course.


This paragraph doesn't make sense (another copy-paste error, I assume):

To ensure that all users feel safe and welcome, we do not allow behaviors or content that causes or contributes to an atmosphere that excludes or marginalizes, promotes, encourages, glorifies, threatens acts of violence against, or dehumanizes another individual or community on the basis of their actual or perceived race, gender, age, sexual orientation, gender identity and expression, disability, or held religious beliefs.

There are a few grammar errors (e.g. "behaviors […] that causes", apparently); and read literally, it doesn't really make sense. This bit:

promotes, encourages, glorifies, threatens acts of violence against,

looks like a mistake introduced in copyediting; it should be:

promotes, encourages, glorifies, or threatens acts of violence against

but I'm not sure how to integrate this into the main list; all my attempts have been unreadable. Perhaps someone from English Language and Usage might be able to manage it?


M--'s comment has saved me writing my own paragraph:

I really liked the examples under Unacceptable Behavior: "No subtle put downs or unfriendly language... Cont'd": meta.stackexchange.com/conduct. This new version is a wall of text that not everyone would read. I am not against the additions per se, but taking away those bullet points in the current version that were really helpful is a questionable decision. – M-- link

This is an important point, but (as I mentioned earlier): the old Code of Conduct was barely linked from anywhere. The main benefit of the clear, pretty, concise Code of Conduct was being able to link it in comments and have it understood. We can't do that with the new version, true…

… but we can use meta. It's where most of our other policies are, after all. As a bonus, that'll let us tailor our recommendations to the kind of inadvertent bad behaviour we see on individual sites, or even write individual posts for different (classes of) situations.

The new Code of Conduct is tailored more towards bad actors than any previous policy we've had; but, as it says in the introduction:

This Code of Conduct is meant to work alongside individual site policies.

The only downside is that we won't be able to lay it out as well as the current CoC, being restricted to little more than CommonMark. But, we could just link to the Internet Archive: that's not going anywhere, despite the IA's recent legal issues.


As a final note: I'd like to say how impressed I've been with the whole feedback process so far; especially with the responses of Bella_Blue, Cesar M and others. It is abundantly clear that they know what they're doing – at least as far as their main job is concerned (human exception handling, and wrangling moderators). I haven't felt ignored, despite overreaction and unprofessional conduct on my part. It really feels like we've got our CM team back.

I still don't quite understand why the new CoC needs to be long and exhaustive, but they've tried to explain it to me, and what I've understood, I've agreed with. (Something something "expectations" something "having unwritten rules" something "legislature" something "justifying moderation decisions".) I am willing to defer to their expertise on this matter.

CC BY-SA 4.0
28
  • 14
    "but (to a European ear) the "actual or perceived race" bit reads like the race realism common to US politics." I don't think "race realism" is the term you want here. That is a euphemism used, broadly speaking, by people arguing that certain claims about racial disparities (from what I have witnessed, most commonly ones regarding intelligence) are scientifically supported. I agree that Americans have an idiosyncratic folk understanding of race that would be incoherent elsewhere; in particular, they identify many people as "black" that I (as a Canadian) would not. May 3 at 20:17
  • 18
    "Broadening this to "denies a person's expressed identity" would cover a few extra circumstances that, historically, we haven't had policies prohibiting." The problem is that, by denying others the right to "deny an expressed identity", you inherently compel people to act as though they perceive that identity. It also isn't clear that "expressed" could mean anything other than "stated", in a Stack Exchange context. If someone says "I am beautiful" and I am not permitted to dispute that, that is power that the other person holds over me. May 3 at 20:20
  • 7
    "This paragraph doesn't make sense" - That single sentence is trying to do way too much. I think it could be salvaged with a structure like: "excludes, marginalizes or dehumanizes another individual or community; or promotes, encourages, glorifies, or threatens acts of violence against such individual or community, ..." - but it would be better split into multiple sentences. May 3 at 20:24
  • 19
    @KarlKnechtel "Beautiful" isn't an identity; "identity" has a specific meaning in social science. (I am suddenly having incredible empathy for the CMs. Is this what it was like for them talking to me?) Perhaps there should be a glossary, explaining what is meant by these terms?
    – wizzwizz4
    May 3 at 20:26
  • 5
    "behaviors or content that causes" — This is technically correct as is. With a disjoint subject ("X or Y"), the verb agrees with the closest part of the subject ("Y"). See Singular or plural verb after a series connected by "or". Alternatively, we could reorder to say "content or behaviors that cause" which would sound better to everyone.
    – Laurel
    May 3 at 23:28
  • 7
    Can we object to the term "social science", BTW?
    – davidbak
    May 5 at 21:47
  • 18
    @davidbak I don't see why you can't, but I don't see why you'd want to. Every major field of social science (even economics) has legitimate, competent practitioners who practise the scientific method, so science can be – and is – carried out in those fields. Just because there are a load of people who call themselves "economists" (respectively, "evolutionary psychologists", "sociologists", etc.) who make stuff up and wave their hands as "proof", that doesn't make the fields themselves illegitimate.
    – wizzwizz4
    May 5 at 23:49
  • 4
    "or invalidates a person's individual experiences in a manner that causes harm." This suffers from the snapback problem. I have seen this degrade to invalidating one side or the other.
    – Joshua
    May 8 at 18:13
  • 3
    @SomeGuy The French law does not say "actual or assumed race" (and neither does the original French). For one, it's set off as a parenthetical. For two, it applies to "membership or non-membership of". For three, "race" is not at the beginning of the list (I can't explain how this makes a difference, but it feels like it does). For four, an "actual" vs. "assumed" distinction doesn't exclude the case where actual membership of a racial category is the result of a subjective determination (i.e., you're $race if you're perceived as such); i.e., it doesn't imply a race-realist "reality" to race.
    – wizzwizz4
    May 8 at 19:04
  • 6
    @wizzwizz4 We've updated the paragraph that you had flagged as not making sense. For your race realism comment, the team and I have been discussing it heavily and have a proposed alteration, but first, I want to tell a story (bear with me). I'm not from the US, and have for my entire life, been considered White in my country. In my first trip to the US, I was called "my brother in black" by the black uber driver who was taking me to the airport on the way back. (1/2)
    – Cesar M StaffMod
    May 9 at 22:31
  • 7
    @wizzwizz4 I was very confused. I very much don't look like a black person. What he was talking about was how I'm not considered white in the US either, because of my country of origin in South America. At that point, I have to ask, what even is race - as, culturally speaking, it's understood differently in different places, and I think that's what you're getting at (i.e., how real it is). Therefore, the team and I think that we should change "race" to "ethnicity" in the CoC, as that's more broadly what we identify as a protected characteristic. Does that address your concerns? (2/2)
    – Cesar M StaffMod
    May 9 at 22:33
  • 2
    @wizzwizz4 They're sometimes called "grounds" e.g. here.
    – ChrisW
    May 10 at 18:21
  • 2
    And maybe it's better to list categories, types, or "grounds" of discrimination than of people.
    – ChrisW
    May 11 at 14:06
  • 1
    @wizzwizz4 the change to ethnicity has been made; I'll check the lists!
    – Cesar M StaffMod
    May 12 at 17:55
  • 1
    @Lambie nah. We talked about it. I know what he meant because I asked, and he explained. I'm not saying everyone feels the same, but I know what he meant because I asked. It was about considering me a person of color (in his view).
    – Cesar M StaffMod
    May 16 at 16:16
54

There are several points in the proposed policy that I find concerning with regard to curators, potentially casting legitimate curation activity in a negative light.

To be clear, I must preface this with: I do not think the policy intends this to be the case. The issue is that we have had repeated complaints that are similar to points raised in the new Code of Conduct.

My concern is that I believe this can "give ammo" to complainers: to not only complain about otherwise legitimate curation, but also to try to escalate. It is also worth noting that these complaints ignore the "assume good intent" directive of the previous Code of Conduct.

Here are the points that I find problematic and I will illustrate how:

From "Abusive behavior policy"

  • Bullying and Harassment – severe, repeated, or persistent unsolicited conduct, misuse of power or tools, cruel criticism, or attacks that target specific users or groups of people in a manner that causes harm. Content that contributes to a hostile or threatening environment, denies a person's expressed gender identity, or invalidates a person's individual experiences in a manner that causes harm.

Very regularly, users equate downvotes with bullying rather than with the content-rating system they are meant to be. Many curation activities have also been labelled "harassment" in the past: voting for closure, downvoting, commenting to ask for clarification, editing a post to fix issues. In addition, active curators on a tag are often accused of "repeatedly" "targeting" a user, when they simply review most incoming questions.

Overall, a complaint like this can very broadly be used to attack users on the site. And it literally has been. Often.

  • Hostile comments – malicious, unkind, or mocking comments that provoke or insult another person, including (but not limited to) the usage of gendered cursing terms in a derogatory way.

Users have taken issue with many a comment that points out some issue with a post or its content (e.g., a problem with the code itself), or with comments that are simply terse. Very often, a complaint posted on Meta explains how "many users attacked" the post in some fashion, when in reality it was just multiple people expressing that they had trouble understanding the post, or even offering pointers on how to improve it. Yet such comments are misinterpreted as hostile on a very regular basis.

From "Disruptive use of tooling policy"

  • Misuse of flags – using flags to harass, target, or abuse other users, or misappropriate moderator attention

At this point, it is the norm for users to complain that any moderation activity done on their content is in some form a "misuse" of the systems put into the site. Closures, downvotes, reviews, etc. have all been accused of being "misused".

  • Vandalism of content – deliberate editing to destroy or sabotage content

Some users are so protective of their content that they perceive pretty much any edit as vandalism.

Here is a more concrete example: There was one case where a user added a signature to each of their posts. When it was edited out for being superfluous, the user got rather agitated and even threatened to sue the site for breaching their freedom of expression...for removing the signature.

In general, often we get complaints about edits rooted in "freedom of speech/expression" rhetoric. Thus, many users can turn to this and call any edits they do not like "vandalism".

  • Targeted votes – votes cast in succession that are non-organic in nature or not based on the quality of the content
  • Revenge downvoting – votes cast as a way to harass, target or abuse other users, so as to lower their reputation, and that are not based on the quality of the content
  • Mass downvoting – votes cast against a person or topic that are non-organic in nature or not based on the quality of the content

The same goes for all three: users have a habit of trying to identify a "bad actor" when they get downvotes on their posts, instead of realising that their posts share similar faults. On Stack Overflow, the FAQ even includes Why shouldn't I assume I know who downvoted my post? because of how often users try to do this and accuse others of "targeting" or "mass downvoting".

  • Misuse of close votes – voting to close or delete a question with repeated disregard for community consensus, or as a way to harass, target or abuse other users, or misappropriate moderator attention

There is a constant stream of complaints about any sort of closure being "inappropriate".

Even some long-standing users express the belief that closure should not be used even when a question has problems and cannot be answered; that doing so is wrong, and that we should instead leave the question open and wait for the author to address the issues.


Again, I want to point out that I do not believe the proposed Code of Conduct is intended to be used against curators for using close votes or downvotes or any of the other systems built into the site. However, I have seen users criticise and denounce all these activities. Very often due to drastic misunderstanding of what the sites are about and how to interact with them. In basically all cases by ignoring the "assume good intent" directive. Thus, I can already foresee that in the future they would look towards the new Code of Conduct and cherry-pick the things that sound the most like the accusation they are going to make.

I do not have a solution to this. Yet, this is the feedback I have about this proposal. At the very least, we can have the "assume good intent" back in the Code of Conduct and maybe dedicate a section to it. So when a user says "I am being bullied by close votes" there is something to point to as a response.


Also, I would really appreciate it if the company made it clear what is acceptable, in addition to what is not. Very often it feels like curators are put in the firing line against new users loaded up with arguments from off-site sources: that any moderation is "toxic", that rights are being ignored, that curation is aimed at harming the user themselves, and so on. Yet there is no single place that I know of that explains why all of this is a deeply rooted misconception.

And no, various Meta discussions, scattered FAQ entries, and the Help Centre web of articles do not count as "one single place".

CC BY-SA 4.0
22
  • 15
    While I certainly recognize the kind, & the style, of complaints you describe, I can't believe they'd be in the least abated by further documentation explaining that down-votes, closures, edits, &c. are acceptable under certain specified circumstances - of course they are, else the requisite functionality wouldn't be implemented. A few people will always take affront at any suggestion that their contributions are less than perfect, & play the victim regardless of how plausible that is to anyone else. May 4 at 16:57
  • 1
    (Needless to say, many complaints are valid, at least in part; it's natural to be somewhat miffed if your contributions aren't well received, & for that to be apparent; & reasonable people may disagree about how heavy-handed moderation ought to be.) May 4 at 16:57
  • 3
    on the point of "Vandalism of content" & the other concerns above (which I share), the document does contain a counterargument / antidote in the "Expectations for Users" section: "Curating content - Content on the network is refined through the hard work of community members, and is a vital step in our model. Community members curate content through reviews and edits to keep questions and answers clear, relevant, and up-to-date. Read more on how the community curates content." It links to the page on editing, but "curation" includes flagging, closure, and soliciting info.
    – starball
    May 4 at 20:01
  • 5
    "In addition, active curators on a tag are also often accused of "repeatedly" "targetting" a user, whereas they just review most incoming questions." Active curators should be permitted to investigate a user's history to look for more bad questions and answers - following a pattern, typically - and deal with them appropriately. Curation resources are extremely limited as is; anyone who steps into the role should be empowered to actually, you know, do things that are more likely to efficiently find issues with the site and actually curate. Anything else will just hasten SE's demise. May 5 at 0:01
  • 2
    Add to this the "targeting" of users - I have a reasonable rep on a couple of sites. There are at least two users whose every entry I dare not review, because if I did I would be banned for serial downvoting; their reading comprehension and knowledge is that bad. Yes, a user can feel targeted by a group because they are that unknowledgeable - but for the site, this "targeting" is a good thing.
    – mmmmmm
    May 5 at 9:19
  • 4
    The best thing about the previous CoC for these concerns wasn't "assume good intent" -- it was "No name-calling or personal attacks. Focus on the content, not the person." IMO it was clear that comments may (or should) criticise content, but never the author.
    – ChrisW
    May 6 at 9:48
  • 1
    @davidbak an example of a gendered cursing term is "dick". That specific one could be used, without derogation, as a nickname (if capitalized) for someone named Richard. More generally, any offensive epithet can be stated in a non-derogatory way if it is not actually being used to refer to someone. May 8 at 1:15
  • 2
    But my guess is that this wording is instead intended to allow people to keep saying "[don't] be a dick" while disallowing... well, I'd best not spell it out, because I've seen people in other places get persecuted before by moderators/staff/admins who didn't care about use-mention distinction. And no, you will definitely not be allowed to question the logic whereby, even though anatomy has nothing to do with gender, slurs that refer to anatomy are "gendered". May 8 at 1:16
  • 1
    I get all the mods being on this thread and all of them upvoting this response but, from the outside, the main current problem with SE is heavy handedness by some mods in some stacks. The mods don't need to be further coddled. They need to be reined in. Sure, you could do that by correctly applying the existing codes of conduct but at the moment it's better to give more "ammo" to "the complainers" because, at least in my case & others I've noticed, the complainers have been in the right and the other mods have been doing an impersonation of the blue wall to uphold their fellow mods.
    – lly
    May 9 at 4:03
  • 5
    @lly moderation issues are not part of the CoC. With that said, if you or anybody has objection to moderation that does not and should not give you or anybody else the right to attack users on the site in clear breach of the CoC while also trying to use the CoC for attacking them. That's just not how things work - if you want the CoC to be respected, then respect it yourself.
    – VLAZ
    May 9 at 7:51
  • 4
    You are correct that we did not intend them to be used that way; in fact, we crafted these definitions with exactly that in mind. In the expectations for our users section, we’ve specifically called out curation and voting so that users know the community as a whole has agency over content on the platform. (+)
    – Bella_Blue StaffMod
    May 12 at 17:39
  • 1
    As far as your concerns with the definitions themselves, we wanted to provide a clearer line to explain to users that there has to be real harm committed for it to be a violation. (+)
    – Bella_Blue StaffMod
    May 12 at 17:41
  • 2
    For example, in Hostile comments, we included words like "malicious" to differentiate them from comments that are simply advocating site norms. Lastly, I would like to echo part of Cesar’s answer here because I think it is applicable: “Most of the time, elected moderators will be handling these (as they have), some other times, staff members may be. Tldr: while I agree it’s interpretation-prone, it’s less so than before, and has a more defined bar.” (+)
    – Bella_Blue StaffMod
    May 12 at 17:41
  • 6
    As he said though, if you have alternative language that you think would better serve these policies we will be happy to consider it. That said, we’re thinking that we could add “Curation activities such as voting (upvotes, downvotes, voting to close, etc) don’t typically qualify as abusive behavior” to the “Abusive behavior” point, would that address the concern?
    – Bella_Blue StaffMod
    May 12 at 17:41
  • 4
    @Bella_Blue At any rate, I believe your suggestion would be helpful.
    – VLAZ
    May 13 at 6:49
31

Essentially turning @Kevin B's comment into an answer:

Is there a list anywhere of what the substantial changes are, relative to the previous CoC? at first glance it appears to more or less be the previous CoC but using more specific terminology rather than the more... open one we had before.

In other words, please tell us what you've changed.

You've given us the main reasons:

  • There are certain things that the current Code of Conduct does not address. The world is ever-changing and it is our responsibility to ensure the safety of users of this network.

  • Upcoming regulatory pressures from Brazil, the EU, and elsewhere demand that our content moderation practices are able to stand up to scrutiny. We do not believe that our current code delivers on those requirements.

You've sorta told us why you've changed, but you haven't explicitly mentioned what's been changed. If you want us to be able to give you detailed, thoughtful feedback, it's much easier for us to do so if we have that information, particularly since many people won't be familiar with the exact copy, intent, and effect of the old one. (I certainly am not.)

Even just bullet points illustrating what you've added, removed, or changed would be fine.

@Mark Olson's comment in response to the 'too much has been changed' comments makes another good case for this:

Even if it's a complete re-write, it's hard to believe that the intended effect of the code will be completely changed. Since you made those changes for a purpose, please share with us what those purposes were (beyond the generic). And it would be very helpful to understand what specific behaviors that are currently permitted (or only vaguely prohibited) will soon be prohibited.

CC BY-SA 4.0
9
  • 16
    The new Code of Conduct is a lot larger, essentially everything has changed. I don't think you can create a meaningful diff here. May 3 at 17:53
  • 7
    Since it's an entire re-write, it's not really doable to create a diff. But if you want to compare, the current one is available here. Bella_Blue has given some of the highlights of things we added to the current version that may be bigger, but an extensive list is not possible. See her comment here.
    – Cesar M StaffMod
    May 3 at 17:56
  • 3
    Even if it's a complete re-write, it's hard to believe that the intended effect of the code will be completely changed. Since you made those changes for a purpose, please share with us what those purposes were (beyond the generic). And it would be very helpful to understand what specific behaviors that are currently permitted (or only vaguely prohibited) will soon be prohibited.
    – Mark Olson
    May 3 at 19:15
  • 3
    @MarkOlson (3) from Bella's comment above, I believe are the only things that have been added, though that doesn't necessarily mean these things have not been moderated previously. I think it's not really possible to make a list of what was permitted and now will not be, since part of the motivation for the update is to make clear some aspects of how things are already being done. May 3 at 19:22
  • 3
    For example, there's now an explicit policy against misleading information. There are trolls that post holocaust denial on History.SE and conspiracy theories about measles vaccines on MedicalScience.SE, and these posts have always been moderated away under existing CoC and site quality guidelines. So, while the policy under the header is new, Holocaust denial has always been bigotry and against the CoC. May 3 at 19:37
  • 1
    @BryanKrause That seemed pretty vague and general to me. YMMV, of course. I would like to think that I don't ever skirt near the boundary, but damned if I can tell where that boundary is!
    – Mark Olson
    May 3 at 20:11
  • 6
    I have seen Holocaust deniers attempt to present quite detailed arguments. Moderating them away denies the opportunity to show the faults in the logic, rebut the accuracy of sources etc. I also think it is likely that they genuinely believe what they're spewing, in most cases; I am not comfortable calling someone's actions "trolling" in cases where they are both sincere and on-topic. May 3 at 21:00
  • 3
    @KarlKnechtel If you want to ask a question about (notable) holocaust denial arguments, you can do so on Skeptics. While I absolutely do attempt to discuss things like this in private, I don't entirely see the need for Stack Exchange to provide a platform for such things just because they can theoretically be argued against. (Remember: the people constantly doing the arguing over and over again do get tired, too.)
    – wizzwizz4
    May 6 at 22:03
  • @KarlKnechtel regarding Holocaust trolls, it's a known issue. May 11 at 13:56
27

Misleading information - We do not allow any content that promotes false, harmful, or misleading information that carries the risk of harm to a person or group of people

Broadly speaking, we do not allow and may remove misleading information that: Is likely to significantly contribute to the risk of physical harm to a person or a group of people

And what about incorrect (or otherwise harmful) answers? I don't know about non-technological sites, but for technology sites, incorrect answers, or answers that don't give sufficient warnings for the consequences of certain actions, or answers with subtle bugs or unguarded edge-cases can carry the risk of harm to people.

Who knows what a copy-pasted bad answer with a memory leak or resource-not-properly-closed could do in the wild? Such information could certainly meet the criteria of being harmful information, and carrying the risk of harm to people.
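To make that concrete, here's a hypothetical sketch (my own toy code, not taken from any real answer) of the "resource not properly closed" pattern described above, next to the idiomatic fix:

```python
import json

# Hypothetical "leaky" helper of the sort that gets copy-pasted from answers:
# if json.load() raises, the file handle is never explicitly closed, and on
# runtimes without prompt garbage collection it can linger.
def load_settings_leaky(path):
    f = open(path)
    return json.load(f)

# The idiomatic version closes the handle even when an error occurs:
def load_settings(path):
    with open(path) as f:
        return json.load(f)
```

Both behave identically on the happy path, which is exactly why the subtle version can survive review and voting.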

And what about overflow bugs? You've got your classic https://en.wikipedia.org/wiki/Therac-25 (not that that was caused by Stack Overflow content, but it was caused by an overflow mistake, and I'm sure Stack Overflow has its fair share of code with unhandled overflow cases).
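As commonly described in the post-mortems, one of the Therac-25 bugs was a one-byte flag that was incremented instead of set, so every 256th pass it wrapped to zero and a safety check was skipped. A minimal simulation of that failure mode (Python integers don't overflow, so the 8-bit wrap is modeled with a mask):

```python
# Simulate a uint8_t-style counter: mask to the low byte after incrementing.
def bump_8bit(flag):
    return (flag + 1) & 0xFF

flag = 0
for _ in range(256):
    flag = bump_8bit(flag)

# After exactly 256 increments the flag has wrapped back to 0, so a guard
# like "if flag != 0: block_the_beam()" silently does nothing on that pass.
print(flag)  # 0
```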

And what if someone were to create and propagate purposely, subtly buggy/unsafe code to mess up future LLMs? (Ex. See this Live Overflow video (probably an April fools joke)). Does this CoC have implications for such activity within the Stack Exchange network?

Here are some discussions about copy-paste cases that haven't necessarily led to harm to people, but I find worth mentioning anyway: https://twitter.com/Foone/status/1229641258370355200, https://stackoverflow.blog/2019/11/26/copying-code-from-stack-overflow-you-might-be-spreading-security-vulnerabilities/

And yet mods aren't expected to judge or moderate correctness or safety of answers, and we leave such things to the voting/comment/edit system (at least on technology sites; I don't know about non-technology sites). Assuming that won't change, it just seems to me that the wording of this new CoC could use some adjustment to deal with the dissonance on this point.


I see an edit has been made:

Content that falls under this policy can be engaged with in several ways: it may be that editing is enough, it may be that providing a factual answer (using the platform!) is enough, or it may be that it needs to be deleted. We encourage users to exercise their best judgment in how to curate and respond to this type of content and, when in doubt, to flag it or contact us.

I just wonder if it'll be clear to readers that editing should generally not conflict with the original author's intent or change the meaning of the content. Ex. we don't edit out spam from posts that try to hide the spam in other content. We just flag it as spam. I'd also have listed (down)voting.

7
  • 12
    "it just seems to me that the wording of this new CoC could use some adjustment to deal with the dissonance on this point." It's a naked hypocrisy that can't be fixed by "adjusted wording"; they need to understand that they can't actually arbitrate truth the way they'd like to, and that pretending to do so would be an exercise of political power. May 4 at 23:56
  • 3
    What about potentially correct answers that are considered "misleading" by the popular (and group-enforced) memes of the day? (I'm thinking of several COVID related things here, just as an example, though they don't come up much on SO they do on some SE boards - and w.r.t. that the behavior of certain extremely influential media/tech organizations in policing their users during that time.)
    – davidbak
    May 5 at 21:52
  • 8
    The current wording goes too far. It's important to forbid deliberately spreading misinformation. That only targets the most egregious cases and will be hard to prove but it removes the need for complicated decisions on what's true and what's not.
    – Joooeey
    May 7 at 0:23
  • 2
    @Joooeey that's a good point, but as I've already stated, even that can get muddy. How do you know a subtle bug with potential for great harm is deliberate or not? See my paragraph in this post about the linked Live Overflow video.
    – starball
    May 7 at 5:31
  • 2
    @starball IMHO, if one can't prove it's deliberate, it shouldn't be punished. I know that the notion of due process isn't SO Inc.'s forte but it ought to be.
    – Joooeey
    May 7 at 6:20
  • 1
    Community curation is always the first line of defense against misleading information. That includes edits, downvotes, and deletion/closure as necessary. For Stack Overflow specifically, I believe the community is well equipped to continue to deal with harmful misleading information as they have in the past. We just made an edit and added this language about curation into the CoC itself. Do you think this will be helpful?
    – Bella_Blue StaffMod
    May 12 at 17:45
  • @Bella_Blue thanks! see my edit.
    – starball
    May 12 at 20:22
22

Change the "Political content" header to make it clear that only some political content is not allowed.

Currently all bold headers under "Unacceptable behavior" provide good summaries of unacceptable behavior. You don't really need to read beyond the headings to understand the point. ("Abusive behavior", "Sensitive content and imagery", etc.)

However, this is not true for "Political content". The heading makes it look like it's not allowed in any form, but the description clarifies that only some forms of political content are not allowed.

I suggest changing the heading to make that immediately clear. Perhaps "Harmful political content", or something similar. (I'm not a native speaker, so I'm not sure what adjective would fit there.)

10
  • 2
    Maybe "Political content policy" as a title? That would indicate there's more to it.
    – terdon
    May 3 at 18:57
  • 2
    @terdon That wouldn't fit well with the adjacent titles. There are things like: Abusive behavior, Sensitive content and imagery, etc, next to it. So, specific kinds of undesired behavior. May 3 at 18:58
  • 19
    Political content shouldn't be allowed. Either you're allowed to share opinions or you aren't; selectively filtering them is itself an exercise of political power. Politics are irrelevant to almost every Stack (except, like, politics.SE, but man is that a cesspool). May 3 at 19:30
  • 2
    @KarlKnechtel I suggest posting that as a separate answer. May 3 at 19:57
  • 2
    @HolyBlackCat it was a substantial part of my initial attempt at an answer, but people here apparently don't like my supposed "conspiratorial" thinking reading the potential for bias into words that I've seen used for biased ends many times in the past. So I'll let someone else try to champion the cause today. May 3 at 20:02
  • 8
    @KarlKnechtel So how about all the users who have modified their own usernames to append "stands with Ukraine" (or "Monica")?
    – matt
    May 4 at 12:57
  • 5
    Let me be clear about this. If "only some political content is allowed", I am gone. (Although I am probably gone anyway, given the recent statements about AI.) I do not care the tiniest bit which content you want to filter, no matter how violent or extremist, no matter how strongly I personally disagree with it: if you propose to take a politically biased approach to content moderation, your action is thereby worse than any words those extremists may say. There is no greater, no more fundamental rejection of liberalism than that. May 8 at 1:24
  • @KarlKnechtel Very confused by the intelligent clarity of the 8 May comment in combination with the 20:02 3 May comment that suggests moderation overreach or failure to assume good faith as an editor. Did you really only mean that what's "conspiratorial" is knowing that moderators given the power to enforce their own political beliefs will abuse that power? You expressed that too confusingly, esp. given that isn't a conspiracy. It's an absolute certainty even for well-meaning people (cf. how much early covid guidance ppl took too religiously & needed subsequent walking back).
    – lly
    May 9 at 4:16
  • 2
    @lly That was a reference to my initial attempt at an answer here, which I subsequently deleted after a heated comment exchange. If you didn't see that, it's better not explained at this point. May 9 at 4:43
  • 2
    @HolyBlackCat we've made the change as proposed
    – Cesar M StaffMod
    May 9 at 22:34
18

First, I wanted to say I'm glad you guys are taking the time to give the CoC a face-lift. The revised CoC appears to be quite detailed and has a lot of expanded information in the "Policies hyperlinked in the CoC" section that is much more exhaustive than the current CoC, and frequently links to your /legal page on Stack Overflow, which is nothing but helpful. I'm also happy to see that you incorporated the feedback of not just close to everybody at the company, but also the entire community (moderators first, then the rest of us as of this post).

With those thoughts out of the way, I wanted to ask you to elaborate a bit on what the largest pain points of the current CoC you intended to fix with these changes are. You mentioned:

There are certain things that the current Code of Conduct does not address. The world is ever-changing and it is our responsibility to ensure the safety of users of this network.

And while I agree, I wanted to know what topics the company identified as specifically needing to be added or expanded upon. My goal in asking is to subject those particular portions to more scrutiny to ensure that the changes are hitting the nail on the head and can stand the test of time.

5
  • 11
    One of the main points is we often find ourselves in situations where the current Code is too general to justify content moderation actions that have been taken, so we have made it more specific to better identify what behaviors we do not allow. We’ve also officially included expectations around behavior such as voting/sockpuppeting.
    – Bella_Blue StaffMod
    May 3 at 17:46
  • 8
    The other factor at play here is the changing regulatory environment and the reality that our current Code may not stand up to the scrutiny of investigation, as insufficiently detailed, etc.
    – Philippe StaffMod
    May 4 at 16:54
  • @Bella_Blue Do the mods think the CoC is too general? That's not a sentiment I've ever seen. May 6 at 4:17
  • @wizzwizz4 If the mods think the current CoC is fit for purpose then maybe it actually is? CMs don't have to make it easier for people to appeal. May 6 at 22:09
  • 2
    @curiousdannii I don't see how the CoC change makes it easier to make appeals; that's still a matter of typing text into the contact form. And if it makes it easier for CMs to implement a transparent "due process" for complaints, I'd say that's a benefit. (Can you achieve the function of the current CoC with a faq meta post?)
    – wizzwizz4
    May 6 at 22:29
17

My Miscellaneous Thoughts, Questions, and Suggestions

I'm very pleasantly surprised to see links to the Help Center pages on How to Ask and How to Answer in the "Our expectations for users" section. I'm glad you're shining more light on the Help Center pages, and explicitly wording those guidelines as expectations. Now the problem is just that a lot of people won't ever read the CoC page :P

Suggestion: Put the kindness point at the top of the "expectations" list

For the "Our expectations for users", I'd like to see the point on "Engaging with users" at the top of the bullet list. "No matter where you engage on the network with your peers, we expect all users to treat one another with kindness and respect." should be underlying every other point.

Also, nit: The current CoC page has in big words, "kindness, collaboration, and mutual respect.", but in the new draft, the starting section says "rooted in cooperation and mutual respect" (where's "kindness"?).

Suggestion: Bring back the point on avoiding sarcasm

The current CoC page says "Avoid sarcasm and be careful with jokes", but I don't see any such similar statement in the new draft. Why was that removed? I for one am very glad that this community is currently one where sarcasm is at least stated as something to be avoided. This might fit under the "Bullying and Harassment", or "Hostile comments" sections (I think probably the former), or go under a dedicated bullet point in the "Abusive behavior policy" section.

Suggestion: Bring back the bad/good conduct comparison examples

I agree with what others have stated about the loss of the examples in the new draft about unacceptable comments. I think those examples are very helpful because they're concrete and down-to-earth. I'd like those to stay or carry over in some form.

I also liked having the point that said "No name-calling or personal attacks", which, forgive me if I'm wrong, I don't see explicitly covered in the new draft.

Suggestion: Bring back the "Enforcement" section

Why is there no section on Enforcement (that mentions steps like "Warning", "Account Suspension", and "Account Expulsion") like there is in the current page?

Suggestion: Concretely explain the meaning of "non-organic" voting

I think "non-organic" might need some more explanation of what it means in the section on bad voting behaviours. My general understanding is that it means "voting on things you wouldn't come across when using the site like an average user", but that's an incredibly vague (and probably poor) definition. It would be nice to pin it down to something or narrow it down to something more concrete.

Misc Suggestions on Links and Wording

In the bullet point on "Sexually Explicit Material", I'd suggest linking to the page in the Terms of Service's section on the Acceptable Use Policy for its related statement on suspensions.

In the section on "Disruptive use of tooling policy", it could be useful to link to the Terms of Service's section on the Acceptable Use Policy for its statement that such violations will result in terminated accounts and blocked addresses.

Can "Content glorifying harm" be changed to "Content that glorifies harm"? The first time I read it, my brain accidentally misread it as "Content-glorifying harm" (harm that glorifies content) :P

In the "Inauthentic usage policy"'s bullet on multiple accounts, I think it could be nice to link to What are the rules governing multiple accounts (i.e. sockpuppets)?.

Question: Why the non-direct link to tips on engaging with users contemplating self-harm?

Is it intentional that

If you would like to engage with a user in crisis, you may want to read this answer for some helpful tips.

links to a page that then links to https://meta.stackexchange.com/a/340597/997587? Or did you actually mean to link directly to that? It's just a bit confusing to get linked to something that doesn't seem to be what the link text seemed to indicate, and have to look for another link in the linked post.

Question: How strict is this CoC on spam and sexually explicit content in user profiles?

Usernames and profiles
While we encourage users to express themselves in their profiles, all user profiles in their entirety are subject to the Code of Conduct and all policies outlined or incorporated therein.

And yet,

Side comment on our optics

In my time on reddit, I've seen a lot of posts that dump on the Stack Overflow community and paint it as enjoying behaviours that break a lot of these rules (Ex. characterizing users as making statements like "that's a stupid question and you're stupid for asking it"). I find it really sad that we've come to have such a reputation / left such an impression. (these threads are pretty easy to find, and get re-hashed often. Just google "site:reddit.com stackoverflow toxic" and use the tools section to limit to results within the past week or month) I continue to make comments in such threads that clarify that we have our Code of Conduct, and that such behaviours are not tolerated by the community as a whole.

See also my other answer posts
5
  • 1
    The user profiles policy is new. As is the misinformation policy, and much of the political content policy. The rest is, however, mere clarification of existing policies.
    – wizzwizz4
    May 3 at 23:45
  • 2
    Do you have suggested wording for the "sarcasm" thing? I tried to get that into the new version during the moderator review period, but couldn't work out where it would fit.
    – wizzwizz4
    May 3 at 23:49
  • 7
    @wizzwizz4 I'm not at all surprised by the fact that "misinformation" and "political content" policies are being added at this point. I've noticed that they're getting added in similar places for many other Internet services - especially ones operated by American companies. I've also noticed that they tend to focus on object-level examples that seem very particularly pointed at a very specific, US-centric political tendency that is especially disliked in the parts of the US that tend to house the HQs of such companies. Which is part of why I express so much skepticism about impartiality. May 5 at 0:16
  • 1
    We have made some edits in response to your feedback. Thank you so much for your suggestions. Check them out and see if they cover your concerns. :)
    – Bella_Blue StaffMod
    May 12 at 17:47
  • @Bella_Blue thanks! I added status-completed tags in my post for those that I saw addressed. Let me know if there's anything I might have missed looking at. (I saw two).
    – starball
    May 12 at 20:31
16

First, I want to thank you for taking this on. I've watched the discussion surrounding every version of these sites' "code" since it was called "The FAQ", and... It's been a shit-show every single time. At some point I realized that it has to be; if discussing such a code wasn't chaotic, it would mean we didn't care; it would all be for nothing.

With that said...

Broad observations

I like that it's short. In particular, it shares something in common with that first "FAQ": the most important bits can all fit on one side of a sheet of 8.5"x11" paper. I... Don't think we've really had that in a lot of years. Not sure if anyone will ever be moved to print this out and hang it on their wall while typing here, but... If anyone did, it might actually not be a waste of wall-space.

I like that there are links to meta posts in tricky situations. For this to be a code, it must be adopted - and daily executed - by all of us. A code isn't static, etched in stone - it's living, etched in our hearts and shaped by our hands. We have callouses from where it rubs on us, and it deserves the same. There is no king here; we have no use for a code that is written in stone.

I'm somewhat annoyed by the frequency of the word "user". This is pedantry for sure, but... "User" is generally either shorthand for "user account" (a set of information used by the software to manage access to the software) or "person using the site" - distinguishing between these two uses is usually possible based on the context (I'll note below where it wasn't), but... It feels a little bit lazy when used too often. I counted 33 instances of "user" or "users" in the draft, which was enough that by the time I reached the end of the document I'd started counting. For comparison, that's exactly as many occurrences of "user" as of "people", "person" and "post" put together, and precisely 33 more uses of "user" than of "pimpmobile".

Observations on specific sections

Abusive behavior policy

cruel criticism, or attacks that target specific users or groups of people in a manner that causes harm.

Pretty sure once someone clears the "causes harm" threshold, we aren't gonna be too fussed about whether or not the target is actually using the site or has an account - IOW, harassing someone into leaving the site isn't a loophole here. Just say "people or groups of people".

**Dangerous speech** – any form of expression (e.g. text, images, or speech) that represents rhetoric that demonizes or denigrates a group of people in a way that depicts them as threats so serious that violence against them becomes acceptable or necessary; rhetoric that increases the risk of violence being condoned or committed against a particular group.

This is a long and complicated sentence, but I think you're aiming for a prohibition on what is sometimes called "incitement". While I generally appreciate the brevity of this document, in this paragraph I felt like it got in the way. Recommend either spending a bit more room breaking it down (or at least breaking the paragraph into multiple sentences...) or linking out to something here or in the help center that can lend clarity.

Dehumanization – depriving individuals or groups of people of their perceived humanity and dignity, for example, by comparing humans, groups, or their stated or perceived behaviors in a derogatory manner with non-human entities such as animals perceived as inferior, bacteria, viruses, microbes, diseases, infections, filth, and other qualifiers.

This long sentence, OTOH, I thoroughly enjoyed. No accounting for taste!

Self-harm and suicide

I'm... Not happy to see this here. But I am glad that you included it. There are people I still think about on a regular basis who took their own lives after sharing some of themselves with us on these sites, who never exactly reached out for help but maybe... Tried and weren't understood. Then again, out of all the emails I sent or saw sent to folks who were overtly suicidal in posts here, I'm not sure I could point to one that seemed like a clear win.

IOW... I'm not sure any of the responses or practices discussed in this section do a bit of good, but I don't have any better ideas and if seeing that the section even exists maybe helps someone... Then it's worth having it.

Political content policy

as long as they do not otherwise violate the Code of Conduct and do not contain insulting language directed at individuals.

Ok, yeah, "individual" is another reasonably unambiguous word for "person" - could also use that in the "Abusive behavior policy" instead of "user"...

Hint. Hint.

Misleading information policy

This is all well and good, but also a huge missed opportunity to note that misleading information may also be edited. Like, is very, very likely to be edited. Will almost certainly be edited. Unless, like, it's abundantly clear that the author is trolling and cares nothing for the truth.

I mention this because... Well, editing is probably my favorite feature here. I like editing, and also I like that folks can edit my posts when they find them misleading. Which they frequently do. And yet, it remains a feature and a behavior that seems to trip up folks casually interacting with these sites.

Actually... I'm gonna end on that note. The comments I had on the next two sections - "Disruptive use of tooling policy" and "Inauthentic usage policy" - were pretty nitpicky. But emphasizing the positive ways in which editing is used here to actively combat misinformation is important. Let's please do more of that!

6
  • 12
    "were pretty nitpicky" .... this is meta and this is what we do after all :D
    – Journeyman Geek Mod
    May 5 at 1:50
  • 6
    Exercise: search and replace "user" with "pimpmobile" before submitting a document for review. If no comments arise, you're not saying anything about actual "users". Also your reviewers are dead.
    – Shog9
    May 5 at 1:55
  • 5
    The pimpmobile rule is my new favorite.
    – Philippe StaffMod
    May 5 at 12:40
  • 7
    "user" is a legacy term that's been around forever and personally one I've never loved for the reasons you've outlined here. I think I'm going to just use pimpmobile in all of my drafts randomly to see if reviewers are really reading my work or not ;)
    – Rosie StaffMod
    May 5 at 13:00
  • 11
    "I'm totally going to remember to remove the phrase pimpmobile..." This is how errors creep into production. Hilarious, hilarious errors. What I'm saying is: please do this. May 5 at 17:32
  • 2
    Thanks for the points! We've made edits to the Misleading information policy (adding clarity around edits/curation). As for the Dangerous speech point, incitement is part of it, but it doesn't cover it all. We added credit to the project where we got the inspiration for that point so people can go read it if they so wish.
    – Cesar M StaffMod
    May 12 at 17:48
16

Let's suppose for a second your code of conduct wasn't a unilaterally imposed totalitarian bulwark, but a "handshake agreement between users and the company".

How, then, would we the users be able to hold the company (and members of its management) accountable for breaking this agreement? Specifically,

  • Hostile comments - Remember when SE Inc.'s Director of Public Q&A declared company critics are part of the problem and need to leave the network? - those were definitely hostile and derogatory comments. What mechanism did we have to take her to task? None. And what mechanism will we have with this brand new shiny CoC? Again, none.

  • Bullying and Harassment - The company, via its appointed moderators, has, on at least a few occasions, engaged in bullying of users critical of its policies. A prominent case was that of Monica Cellio. To this day, the company holds on to its claim that its actions were somehow justified and nobody has answered for that affair. Of course, such actions are usually hidden from the eyes of most users unless others somehow start up a conversation about it; otherwise - it's secret punishments; penal actions against users are not made public (let alone with access to relevant evidence or adjudicative decisions).

Also, I don't know about you, but where I come from, an agreement requires both parties to, well, agree. And that document is the opposite of agreeable.


... but of course, this is all just a rhetorical exercise. Your ideological preening is tiring. You're just going to continue to do what you want, and we'll just have to hope not to become the focus of attention for some weird US-subcultural sensibility of yours. Actually, I'm worried about what exactly your "pain points" are this time, and whether these new excuses will lead to more mistreatment of people, like last time.


As a service to my fellow users, here is some music to set the mood for reading the CoC.

CC BY-SA 4.0
14
  • 4
    The "pain points" are clearly explained, both in the question and in comments: there are new regulatory concerns, and they want to be able to justify content moderation decisions. Is there something that makes you suspect these aren't their intentions, or think the new CoC encourages mistreatment of people?
    – wizzwizz4
    May 7 at 16:52
  • 7
    Are there any parts of the CoC you actually object to? Because it doesn't look like you're addressing it. The events of 2019 were bad, but this is almost entirely unrelated to them.
    – wizzwizz4
    May 7 at 17:49
  • 4
    @wizzwizz4: Also, it's the opposite of "clearly explained". Other websites - including web forums and such - are not undergoing anything like this, so what's special about SE? Also, what regulations? What specifically in the current CoC contradicts said regulation? etc.
    – einpoklum
    May 8 at 13:07
  • @einpoklum Here's some stuff about the UK's Online Safety Bill. If you don't want to read all that, Your compliance obligations under the UK’s Online Safety Bill; or, welcome to hell is out of date, but is suggestive of the kind of stuff that's coming in. I know little about the proposed regulations in the countries the CMs have explicitly mentioned.
    – wizzwizz4
    May 8 at 16:47
  • 2
    @einpoklum I am asking seriously, because (as I intended to mention at the end of my answer) I kicked up a huge fuss in a very confrontational manner, and I was listened to, and they changed stuff in response. Sure, there's lots of stuff they haven't changed, with no reason given (which, given the nature of those issues, I suspect might have to do with secret legal arguments they're keeping in their back pocket), but they're listening. They've even got a chat room open, if you'd prefer to discuss stuff.
    – wizzwizz4
    May 8 at 16:53
  • 3
    @wizzwizz4: Well, you wanted them to insert more punishable offenses into the CoC. My opposition is in the other direction: I want to undermine the premise of the CoC, and introduce checks on the company rather than the users. About the UK safety bill: 1. SE didn't mention it explicitly, but ok. 2. Why does it apply to SE, which is a US company without activity in the UK? 3. From the first diagram, it seems SE should be "subject to exemptions" 4. Will read the blog post.
    – einpoklum
    May 8 at 18:50
  • 1
    @einpoklum 1. True – but I can't comment on things I don't understand. If you want details, you can ask. 2. Stack Overflow has a London office; also, these laws tend to apply even if you're not based in the country. 3. What diagram?
    – wizzwizz4
    May 8 at 19:09
  • 1
    @wizzwizz4: 2. Oh. 3. This diagram
    – einpoklum
    May 8 at 19:30
  • @einpoklum The exemptions under Schedule 1 (page 178 of the Online Safety Bill) don't apply to Stack Exchange.
    – wizzwizz4
    May 8 at 19:43
  • 2
    @einpoklum There's now public evidence that they're listening and making changes – even to some stuff I thought they wouldn't be changing. Please consider posting a new answer with your suggestions! Preferably by this Friday; a large company's bureaucracy is slow, and it's no good waiting till the last minute.
    – wizzwizz4
    May 10 at 6:02
  • 1
    @wizzwizz4: (Thought I'd answered this, apparently I haven't) - they're making changes mostly in the wrong direction, and not fixing what's fundamentally broken. You may also want to read this question from a while back.
    – einpoklum
    May 13 at 9:58
  • 2
    I read that at the time: the very first answer is about them not consulting the community, which they are now doing.
    – wizzwizz4
    May 13 at 14:13
  • 3
    @wizzwizz4: Again, no, they aren't. It's like when they "apologized" for the treatment of Ms. Cellio. They are doing the opposite of what the community wanted and wants them to. But they have now just gotten somewhat better at astro-turfing and presenting their actions as the result of consultation than before. The policies are still basically the same.
    – einpoklum
    May 13 at 23:12
  • Re: "Hostile Comments", unfortunately the code of conduct does not extend to what happens at Twitter. Some CoCs do cover a concept such as Private harassment, but it's a bit complicated to imagine endorsing someone else's derogatory terms at Twitter serving as a reason for which disciplinary action on Stack Overflow should ensue.
    – E_net4
    5 hours ago
15

We plan on going live with this update later in May, but until then, this is a very real chance for you to provide actionable feedback on the Code.

It's all a bit rushed, isn't it? May just started, and I have a sneaking suspicion that y'all are implying this will be wrapped up within the next two weeks. That doesn't leave a lot of time for public feedback or discourse, and if you're expecting me to shake your hand on this, I'm going to want to go through it with a fine-tooth comb and give you a chance to iterate and improve on it.

Don't just bowl us over on this one. Again. Please. I'm begging you.

Could we get at least a month guaranteed to have that debate and discourse?

Particularly, we'd like to make sure we've captured the correct expectations in the "Our expectations for users" and we're very open to improving it further.

Well... if I don't get enough time to comb over everything, at least I'll try to contribute here.


I originally had a comment on the "definition of tools" section as it relates to Bullying and Harassment, but I'm happy/comfortable with the provided definition.


In the Bullying and Harassment section, I think there's a weasel word in this expression. It's not that I disagree, but people interpret "cruelty" in different ways.

Bullying and Harassment – severe, repeated, or persistent unsolicited conduct, misuse of power or tools, cruel criticism, or attacks that target specific users or groups of people in a manner that causes harm. Content that contributes to a hostile or threatening environment, denies a person's expressed gender identity, or invalidates a person's individual experiences in a manner that causes harm.

Someone who's curt with their comments may come across as cruel to another person. How do you plan to adjudicate those situations? Does the CoC give users a blanket ability to just... claim that they are victim to this clause and demand retribution when the person making the comment isn't being cruel, they're just being blunt?


This caught my eye:

This Code of Conduct is meant to work alongside individual site policies. Sites and Chatrooms may choose more restrictive policies for their content than what is allowed here, particularly around what is on-topic or off-topic.

Does this imply that chat cannot be less restrictive (within reason)? For instance, permitting profanity/swearing in chat when the main site is aimed more toward semi-casual office speech? Or is chat going to be held to a higher universal standard?


Overall though on a first pass, I feel pretty OK with what's here. This might change with some of the revisions made or other suggestions incorporated, but I do think that this establishes a lot of the norms that we've held already and does away with the wild and self-defeating polarization that the most recent CoC revision brought in.

CC BY-SA 4.0
16
  • 12
    Responding to the claim that this has been rushed: a small group of moderators were first made aware of this update in early February. The rest of the moderation team was informed in mid-March(ish). At every step of the process, the staff at SE have made it relatively clear that they are not looking to make any significant changes, that this is going to be the policy going forward, and that the only changes they are planning to make are, essentially, copy-editing. May 3 at 19:40
  • 12
    @XanderHenderson: So the impact of this is that I'm basically wasting my time? As in, if I'm not given an opportunity to talk about this CoC and what I do feel and don't feel make sense in an attempt to work with them, I'm basically SOL? If that's the case, why do this exercise at all? Maybe I shouldn't have got my hopes up...
    – Makoto
    May 3 at 19:43
  • 14
    At the risk of giving my cynicism free rein, yeah, that's basically the sum of it. Many of us have had this complaint throughout the process. May 3 at 19:46
  • 12
    @XanderHenderson I'd actually dispute that such was the case: there are a couple of things we said we absolutely would not include, but we never said we would not make any significant changes. We did. Some lines are a complete 180 on what was said before, as a result of direct feedback; other lines we copy-pasted directly from the mods who drafted them, without any change. One of the things we said we absolutely wouldn't do was a thing you wanted, and I recognize that. But I think it's not true to state that we refused to make any significant changes.
    – Cesar M StaffMod
    May 3 at 19:46
  • 10
    @Makoto to answer your first hard question (that's easy to answer definitively): a month may not be doable (as in, the plan is to have it live by the end of May, a month would require it to slip into June), but it's likely that it will be longer than two weeks, it will sort of depend on what state the discussion/doc/points are in two weeks. For the cruel criticism point, we'll get back to you.
    – Cesar M StaffMod
    May 3 at 19:50
  • 8
    As for "less restrictive": correct, chat can't be less restrictive than what the policies there say. That doesn't mean it can't be less restrictive than the main sites. Rather, the main sites can be more restrictive than the CoC (and in several cases will be). So the CoC is the baseline: main sites can be more restrictive than that, and so can chatrooms. Unless I'm misremembering, we don't mention profanity/swearing as entirely forbidden in the CoC, so that's a case where a main site may choose to be more restrictive.
    – Cesar M StaffMod
    May 3 at 19:52
  • 4
    "If that's the case, why do this exercise at all?" Same reason as when the CEO posted about AI integration. Or anything else that gets handed over the wall. May 3 at 20:31
  • 6
    @CesarM To be clear: consequently, nowhere on the Stack Exchange network (including in chat) would I be permitted to promote "misinformation" in an argument with someone else; and the determination of what is "misinformation" would be made in accordance with the current mainstream political consensus of what is supposedly "widely disproven"? And the policy furthermore sees fit to call out specific examples of this that are particular to American culture war? May 3 at 20:35
  • 1
    @Makoto: Yeah, it's been sort of an (admittedly inconsistent) general guideline based largely on stuff like this MSE FAQ, which has an answer by Jeff Atwood saying that using expletives is "generally" not allowed on SE sites (the reasoning I've sometimes seen cited for this is to keep the site/network from being blocked by certain monitoring software and the like). I think some previous version of SE's guidance did link to that FAQ... This CoC update doesn't do so, however.
    – V2Blast StaffMod
    May 3 at 21:14
  • 5
    @Makoto I think it's worth digging up some history for this one: first there was "be nice", then there was "no unfriendly language", and now we have more policies, one of them being "cruel criticism". As we made this change (and I can speak particularly to it), the intention was to leave as little room for interpretation as possible. While Cruel Criticism is still interpretation-prone, the bar for "cruel" is (imo) more defined than the bar for "unfriendly". (1/2)
    – Cesar M StaffMod
    May 4 at 13:45
  • 7
    The standard is not someone claiming it is cruel to them, but rather a reasonable interpretation that it is. Most of the time, elected moderators will be handling these (as they have), some other times, staff members may be. Tldr: while I agree it’s interpretation-prone, it’s less so than before, and has a more defined bar. That said, if you have a suggestion for an alternative language, I’m happy to consider it.
    – Cesar M StaffMod
    May 4 at 13:45
  • 6
    @CesarM You all said in the Mod Team post that you were not going to "fundamentally" change the CoC policies you wrote. "Significant" is a synonym of "fundamental" so Xander's description is accurate. The kind of feedback you all sought in the Mod Team included "questions" the mods had and "tweaks in wording" (i.e. what Xander described as copy-editing). If you are really willing to consider "significant" changes then why the rush to roll out the new CoC?
    – Null
    May 4 at 14:42
  • 1
    @CesarM: I've had a re-read of it, and I think it's all actually covered underneath the "Hostile comments" section. The way I read it, all of those sections are related to each other, and the kinds of comments you want to prevent would be the kind of comments that are openly hostile. So I'd actually advocate for that blurb to be taken out entirely as redundant.
    – Makoto
    May 5 at 19:49
  • 2
    @Makoto that makes sense. We've removed cruel criticism from there and added a "cruel" classifier to hostile comments; it does seem to belong better there.
    – Cesar M StaffMod
    May 12 at 17:52
14

Some of the new additions, like the political content and misleading information policies, are much more likely to be violated in chat than on the main sites. A few sites, like Skeptics and Politics, deal with content of that sort, but on most sites political content would simply be off-topic. So what remains is chat, where people might talk about topics like this.

Chat moderation is a bit of a mess: all mods have power there, but nobody is actually responsible. And if we encounter a complex case that potentially violates, e.g., the misleading information policy, we might have to escalate to the CMs when we cannot judge the case ourselves. The complexity of these new rules makes escalation much more likely than before. But that mechanism relies on our own sites, while in chat we might be moderating users who don't have accounts on the site where we are a mod. And it's also not visible to other chat mods that something was escalated.

Is there any guidance specifically on how to moderate chat given the new Code of Conduct?

CC BY-SA 4.0
1
  • 5
    Whenever you're unsure, you can always escalate it to us. If they don't have a profile on the site you're a mod on, you can also email us, ping us, or ping a mod on a site they do to raise it there. We're always happy to help.
    – Bella_Blue StaffMod
    May 3 at 18:14
12

Who is meant to be enforcing this? The "Our expectations for users" section says that "if you encounter something that you believe is harmful, please flag it for moderator attention", which implies the moderators are the first line of handling. However, the "misleading information" section says that "we do not allow any content that promotes false, harmful, or misleading information". Are moderators expected to enforce this? If so, is it now expected that moderators are subject matter experts in the sites they moderate? That hasn't previously been a requirement, but without subject matter expertise, I'm not sure a moderator can enforce this policy. In fact, we have a decline reason for flags about inaccuracies and wrong answers.

In the "Unacceptable behavior" section, there could be some redundancy between "misleading information" and "political content". It's not fully clear to me if the "for the purpose of promoting the interests of a political party, government, or ideology" is regarding all content the promotes those interests (if so, it should be in the political content section) or specifically refers to content that "promotes false, harmful, or misleading information" (if so, it's redundant, since all content that promotes such information is prohibited).

In the "Unacceptable behavior" section, the description of "Sensitive content and imagery" is unnecessarily verbose. Suicidal and self-injurious behaviors are harmful behaviors, so the first two sentences are repetitive as they are about promoting or encouraging or providing instruction for harming oneself or others.

In the "Political content" section, the hyperlink says to "Read more in our Political Speech policy". The section is then later called the "Political content policy". Please review to ensure that all references to other sections use the correct names.

Why is the Political content policy formatted differently than the other sections? I find the brief introductory paragraph followed by a small number of bullet points to be easy to read and consume. However, this is a small wall of text. I would recommend reformatting for consistency and readability.

The "Misleading information policy" has some content that is too specific. For example, why specifically call out widely disproven claims regarding health? Any promotion of disproven claims should be prohibited by a misleading information policy. A single bullet point can be used to prohibit this content and give examples of health, historical events, and election fraud, if specific examples are deemed to be necessary.

In the "Abusive behavior policy", "Hostile comments" should be expanded to include "another person or group". To reduce verbosity, it's likely that "hostile comments" is unnecessary and is well covered by the other categories or could very easily be rolled into the other categories by moving a few words.

The "Disruptive use of tooling" policy is extremely verbose. The whole paragraph under "Targeted Voting" is extraneous information. You can remove the concept of "non-organic" and end up with a much smaller, cleaner list that makes it clear that voting is about the content and not people or topics.

In the "Sensitive content and imagery policy", why is "non-consensual imagery" limited to nude and sexually suggestive imagery? With that qualifier, I don't see how it becomes different than sexually explicit material.

In the "Sensitive content and imagery policy", moving "self-harm" to its own bullet adds unnecessary verbosity. It could very easily be combined with "content glorifying harm".

In the "Self-harm and suicide" section, why do you link to the 988 Suicide & Crisis Lifeline? Is this available outside the United States? I believe the link to Suicide.org is sufficient and the reduced verbosity makes it easier to consume.

The style of hyperlinking is quite verbose, and even annoying. Why did you choose to use the "Read more on XYZ" style? For example, instead of saying something like "Read more on how to ask a good question", make "Asking" a hyperlink to more details on asking. When you do use keywords to make hyperlinks, sometimes they aren't quite right, such as making "minimum quality" a hyperlink, when it should probably be "minimum quality standard" or "quality standard", since the link brings you to information about the standard.

I'd recommend running this through a tool that calculates readability scores. I did this for the current version, and it has a Flesch-Kincaid Grade Level of 13 and a Gunning Fog Index of 15.6. These high scores indicate that it may not be the most accessible for people who are not native English speakers. It does get a little better if you analyze it section by section, but the first section has a Flesch-Kincaid Grade Level of 11.3 and a Gunning Fog Index of 13 - still a bit on the high side for non-native English speakers, and potentially fatiguing even for native English speakers to read. Write this for the users, not the lawyers - the second paragraph of the Abusive behavior policy is a good example of the problem.
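For anyone curious where those numbers come from, both scores are simple formulas over word, sentence, and syllable counts. Here's a rough Python sketch; the syllable counter is a crude vowel-group heuristic, so its scores will drift a little from dedicated tools:

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count vowel groups, subtracting a
    common-case silent trailing 'e'. Real tools use dictionaries."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

def _counts(text: str):
    """Split text into sentences and words with simple regexes."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return sentences, words

def flesch_kincaid_grade(text: str) -> float:
    """FK Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    sentences, words = _counts(text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

def gunning_fog(text: str) -> float:
    """Fog = 0.4 * (words/sentences + 100 * complex_words/words),
    where a 'complex' word has three or more syllables."""
    sentences, words = _counts(text)
    if not sentences or not words:
        return 0.0
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    return 0.4 * (len(words) / len(sentences)
                  + 100 * complex_words / len(words))
```

Short, plain sentences score in the low single digits on both scales; dense, polysyllabic legalese shoots well past the grade-13 mark cited above, which is the point being made here.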

CC BY-SA 4.0
3
  • 4
    Flesch–Kincaid Grade Level ("30.0–10.0. College graduate. Very difficult to read. Best understood by university graduates."). Gunning fog index ("13. College freshman"). May 5 at 15:00
  • 2
    From what I understand mods are supposed to escalate misinformation cases they cannot judge themselves. There is also likely a lot of inherent selection here, those cases will much more likely arise on sites like Politics, History or Skeptics where the mods are more likely to be able to handle many of them themselves. May 5 at 21:27
  • "is it now expected that moderators are subject matter experts in the sites they moderate?" - largely speaking, what is described in that section sounds largely separate from most sites' subject-matter to me. It's about misinformation about harm, health, political candidates, voting, historical facts, etc. Though I have found similar potential for confusion, which I wrote about in my answer here
    – starball
    May 5 at 22:43
12

I have concerns about the "Misleading information policy" and how to adjudicate the borderline between "wrong" answers and "misleading" answers. My concern is both substantive (what is the definition of the difference?) and procedural (who exactly is empowered to adjudicate misinformation vs. crap, and what guidelines do they use?).

Historically, we have had a very powerful tool on the network to combat misinformation, the downvote. Users who post too many downvote-gathering answers face being banned from answering any more questions. Diamond moderators have historically stayed away from the determination of "Truth", instead relying on the community to differentiate high-quality answers from the detritus of naive, misinformed, poorly-sourced, disorganized, and/or just plain bad answers. The overwhelming guidance given to moderators thus far has been to not take preemptive action on answers felt to be "wrong", but only take action against non-answers such as spam, hate speech, patent nonsense (e.g. "apoaspogpergaeprg hi hi hi"), new questions, and commentary (e.g. "Did you ever find a solution to this problem?"). These removal reasons are covered by our existing Spam, Rude, and Not An Answer flags and would not be covered by a hypothetical Misinformation flag.

We even have a standard flag decline reason that moderators use to remind flaggers that moderators do not take action against answers on the basis of them being wrong:

Declined - Flags should not be used to indicate technical inaccuracies, or an altogether wrong answer.

Can I assume that, with the new CoC, this flag decline reason will be going away and/or being replaced with something that acknowledges that moderators will now be handling some wrong answer flags?

Are we going to have a "Misinformation", "Misleading", or "Conspiracy" flag that users can raise on answers and have them evaluated for Truth by moderators?

For example, such a flag might look like one of these:

Misleading: This post answers the question, but it contains content that is unsupported or widely disproved. It is harmful to public health or democratic institutions, and might need to be removed.

Conspiracy: This post answers the question, but relies on widely discredited conspiracy theory content such as QAnon, Chemtrails, Flat Earth, Satanic Ritual Abuse, or 9/11 False Flag Operations or is otherwise harmful to public health or democratic institutions. It violates our Misleading Information policy and should be removed.

How exactly should moderators be determining if an Answer brought to their attention is Misinformation that they should take action on right away or simply a Wrong answer to be left to the community to downvote into oblivion? Does it depend on the poster's intent (e.g. posting vaccine Autism nonsense because they don't know any better vs posting vaccine Autism nonsense as part of a calculated campaign of fraud)? Does it depend on how "obvious" the false or unsupported statements are? Can "wrong" answers that require specialized knowledge to recognize as false (e.g. that dereferencing a null pointer in C is defined behavior) ever be considered misinformation or should they always be considered "just wrong" and downvoted?

What constitutes Misinformation vs Incorrect information has varied and continues to vary. For example, at various times in the past few years and according to various authorities, the idea that SARS-CoV-2 originated in a lab has been treated as anything from a likely and supported idea, to a doubtful but reasonable hypothesis, to absolute misinformation. Do moderators have the skills and discernment to adjudicate all of this?

Stepping back for a moment, do we even want moderators to become arbiters of Truth?

I do want to say that I "get" that the misinformation rule is designed to combat things like QAnon, Satanic Child Abuse, and Freemasons-Conquering-The-World conspiracy postings and not posts from ignorant high schoolers who are shaky on C sequence points and exactly what constitutes undefined behavior, but I worry greatly about how this is going to play out in practice where the boundaries between conspiracy and wrong is unclear or opinion-based.

In response to a comment by Starball, is there a difference between someone posting an answer on Stack Overflow that is vulnerable to a known exploit (harmful if a reader uses the code in a production system) and posting an answer on Politics.SE claiming that Joe Biden rigged the 2020 US POTUS election (harmful to democracy)? Do we give the first one a pass because moderators aren't expected to be experts in every known exploit or haxoring technique, but come down hard on the second one because clear and convincing evidence of the legitimacy of Biden's election is easy to find and widely accepted among non-experts? Do we give the first one a pass because it is non-political?

I'm especially concerned how we are going to proceed with Truth adjudication when moderators are not required to be subject matter experts. Are we going to have new policies requiring moderators to prove subject matter expertise in their site's scope (e.g. by sitting some sort of content exam or submitting academic transcripts or professional licenses or certifications), or are we going to introduce a new subject matter expert role? For example, will Medical Sciences.SE need to hire a panel of physicians and public health experts to adjudicate which answers are harmful to public health and subject to immediate deletion under the Misleading Information Policy and which are just crappy answers that can be handled with downvotes? Will we have a rule that only licensed pilots, aircraft mechanics, and air traffic controllers may become or remain diamond moderators on Aviation.SE in order to ensure that moderators will be able to differentiate dangerous misinformation from just plain crap? Will answers on Parenting.SE need to be screened by pediatricians, child psychologists, or Child Protection Service (CPS) officers who will be empowered to preemptively delete content they think could be harmful to children if followed?

In response to a comment by Fattie, I do see something similar. Viewpoints do not become Misinformation because they are wrong, unsupported, or even potentially dangerous in the hands of the foolish or ignorant; they are Misinformation because they are dangerous to those in power. For example, QAnon directly challenges the authority of Joe Biden, and he therefore has an interest in finding ways to suppress it in order to bolster his position. Similarly, vaccine denialism is dangerous to Big Pharma and the security of their revenue streams. Now, I don't personally believe in QAnon or vaccine denialism, but I do recognize that they are being slammed as misinformation precisely because they threaten those in power and not because they are wrong or unsupported. So, I would advise that we consider whom we are protecting when we identify, flag, and remove "misinformation" from our sites, and whether those parties deserve our protection.

Also keep in mind that even true information can be "misinformation" when it challenges those in power. It wasn't too many decades ago that Big Tobacco pooh-poohed and suppressed scientific research showing that smoking was harmful, vigorously asserting that it was unsupported and misleading. The question wasn't whether smoking was harmful to smokers, but whether publishing allegations of harm was harmful to profits. And it was!

CC BY-SA 4.0
9
  • 9
    "Historically, we have had a very powerful tool on the network to combat misinformation, the downvote." I disagree here. The downvotes do not combat misinformation, they combat low quality posts. The distinction is that there are plenty posts that are wrong but have positive score. Often quite a large score compared to actually correct answers. Just because the incorrect answer may have have better presentation. To truly get into answer-ban territory, a user has to post bad-looking (and potentially wrong) answers. E.g., a simple code dump.
    – VLAZ
    May 9 at 11:41
  • 5
    On SO anything that could violate the misinformation rule would be off-topic or not-an-answer. So this is a rule that will likely affect a few sites like Politics, History and Skeptics and chat, not the majority of technical sites. And to me the rule doesn't really apply to grey areas, it is mostly useful to handle well-known misinformation. The lab leak issue has a large grey core where we can't know what happened, but there are also versions of this that are outside that core e.g. "COVID was a Chinese bioweapon for sure" May 9 at 12:18
  • 5
    Just found a decent example of where downvotes have no power: this answer about sorting on a string property - the answer does not show how to sort strings but how to sort numbers. This is a vital distinction, because it leads into the same trap as in the question How to sort strings in JavaScript, where sorting strings as numbers produces wrong results. So, the answer is wrong and misleading. The score is +79/-5, and thus there is no real way for it to be removed. We have lost the battle, IMO, and the war.
    – VLAZ
    May 9 at 15:02
  • 3
    "I "get" that the misinformation rule is designed to combat things like..." - i.e., a very few specific worldview that the US government wants to suppress, cannot legally do so directly due to the First Amendment, and wants corporations to help out with. (The validity, plausibility or reasonableness of such worldviews doesn't enter into it.) Section 230 was supposed to enable companies to ease off on censorship, but in practice it has empowered them to do far more instead. May 10 at 23:06
  • 2
    @MadScientist "The lab leak issue has a large grey core where we can't know what happened, but there are also versions of this that are outside that core"... and if someone expresses a viewpoint outside of that "core", why does it need to be treated as an offense against the community, rather than just an unjustified take or low quality? Further, why do apparently very specific topics require this treatment, whereas e.g. "country X recognizes Taiwan as a country separate from China" (for factually incorrect values of X) seems like it wouldn't? May 10 at 23:09
  • 4
    On SO there are any number of "famously incredibly, spectacularly, wrong" answers that are ticked and continue to get vast numbers of upvotes. This is a well-known, elephant-in-the-room, humorous facet of SO that everyone in the world knows about and SO owners pretend doesn't exist. Of every 10,000 SO voters, 9500 have embarrassingly, incredibly, low technical knowledge of software engineering; almost all voting is either momentum voting or voting based on tidy formatting and good spelling in the answer. Call a spade a spade, any "misleading info" clause is to allow deletion of ...
    – Fattie
    May 11 at 14:28
  • 3
    ... material that is currently politically déclassé, notably "vax denier" and "election denier" babble in the USA milieu. Again, they should just call a spade a spade. Instead of trying to pretend, technocrat-style, that "someone" has platonic knowledge, the guideline in question should be called something like the "outside acceptable norms" item.
    – Fattie
    May 11 at 14:31
  • "but come down hard on the second one because clear and convincing evidence of the legitimacy of Biden's election is easy to find and widely accepted among non-experts" What's funny is that equally clear and convincing evidence of the opposite conclusion is just as easy to find. No attempt should even be made to moderate this issue. It should be left up to the reader to decide. The site should work by allowing people to present the facts and make their case, not by suppressing particular opinions.
    – jpmc26
    17 hours ago
  • 1
    No idea where you live, but "they are being slammed as misinformation precisely because they threaten those in power and not because they are wrong or unsupported" is itself misinformation. People died because of QAnon and antivax conspiracies, because those conspiracies were and are wrong, and unsupported by evidence, and that is why reasonable people opposed proliferation of those ideas. The people who died were frequently not those in power, but among the most vulnerable in society, and it's added harm to deny that injury is and was prevented by reducing the spread of those ideas.
    – Nij
    4 hours ago
9

I actually like this a little better than the original version, though there are a few bits of feedback I'd still offer.

Upcoming regulatory pressures from Brazil, the EU, and elsewhere demand that our content moderation practices are able to stand up to scrutiny. We do not believe that our current code delivers on those requirements.

In a sense, this might be somewhat problematic. Certain countries, or even states, seem to be hurling themselves headlong into policies that are entirely orthogonal to what SE intends to do. In addition, we mostly moderate communities, not content. I'd love to be wrong, but this aspect still feels potentially troublesome to me.

We have outlined below some expectations that are generally true across the network; some sites may have stricter requirements or use different policies for questions/answers/comments. Please adhere to individual site policies where they differ from these expectations.

I like this - but in a document that aims to explicitly list out what the expectations are, I feel that "individual site policies as per the site meta sites" might be more precise, and would give a hat tip to meta as a place to look for policies.

On that note - might I suggest that the 'addenda'/links be hosted on meta (under an announcement lock), as is a copy of the CoC - if there are changes, it provides a very organic way to keep track of them.

Also, on the third read-through - and of wizzwizz4's answer - I feel like the underlying tension finally dawned on me. We often joke about meta being case law. There are two Western traditions of law - the English one is closer to what we do, while the Napoleonic/Roman system on the continent relies more on laws being specific and explicit. While one of the goals is to have more detailed rules for people to refer to, it's worth remembering that part of a moderator's role is to handle exceptions, and where something's not covered explicitly by the rules, the trust and support for them to handle it in the way they deem best for the community shouldn't be weakened in any way.

CC BY-SA 4.0
10
  • 2
    "In addition, we moderate communities, not content mostly" - wait what? I'd always thought it was either more content moderation (for elected site mods and not Community Moderators), or a mix of both.
    – starball
    May 4 at 0:20
  • @starball Content moderation is for you lot. Except in very early public beta, and things that the community has delegated to moderators, mods mostly stay well clear. See A Theory of Moderation.
    – wizzwizz4
    May 4 at 0:46
  • 1
    @wizzwizz4 I've seen diamond mods delete plenty of non-answers on SO though. Doesn't that qualify as content moderation? And doesn't comment flag handling count as content moderation too?
    – starball
    May 4 at 0:49
  • 2
    More of "what do we do when we see child porn, anything more than the most casual of trolling, and the most blatant of spam". It's a matter of focus, and of what my core goals as a moderator are. I want a healthy community, and my moderation is focused on that, as opposed to dealing with content.
    – Journeyman Geek Mod
    May 4 at 0:54
  • 3
    Some countries are hurtling straight into censorship and tyranny. "Misleading information" is extremely broad and can be easily abused. While I am inclined to think that SE will not start broadly censoring content and answers, this is a rather slippery slope. May 6 at 10:20
  • 1
    @ResistanceIsFutile We don't need a policy document for the de facto moderation policy to be "censorship and tyranny". If issues occur in practice, you've still got meta (and, failing that, you can contact a moderator: there's a policy document saying moderators are allowed to raise complaints).
    – wizzwizz4
    May 6 at 22:07
  • 2
    @wizzwizz4 Besides a few incidents that happened during the 2019 kerfuffle, which were understandable under the circumstances, I have never encountered a situation where mods resorted to censorship, let alone tyranny. I am not worried about them; I am more worried about opening the doors to government censorship and tyranny. I am not saying that they can do much about it if such situations happen, but there is a huge difference between applying the minimum needed to satisfy such requests and actively enforcing them. May 7 at 6:15
  • 2
    @ResistanceIsFutile That seems like a separate issue, and I don't think it's one we need to be terribly concerned about. Stack Overflow has a long and valued history of political activism 1 2 3, and I doubt they'd stand for that.
    – wizzwizz4
    May 14 at 12:52
    @wizzwizz4 I sure hope so. Anyway, this is not much of a concern on technical sites, but there are some sites in the SE network where covered topics and discussions may be of interest for government censorship. The EU is already actively restricting access to some content (for instance, Russian state-related news sites and social media channels), and it is not an unimaginable scenario that such practice could spread to other areas and topics. May 14 at 13:40
  • 1
    @ResistanceIsFutile Well, you know what they say. Be gay, do crimes, and don't talk about them on the internet. You're not going to get an official response from Stack Exchange on this matter; and of course, Stack Overflow Inc. will be nothing but cooperative, it's just… do you realise how hard it is to remove things from a community-moderated site without permanent deletion, without invoking the Streisand effect? So of course it's going to take some time to respond to a request…
    – wizzwizz4
    May 14 at 13:43
7

This current version reads a lot nicer and seems more succinct and neutral than former iterations. Thank you for that.

Some questions and notes, in no particular order, but numbered for the sake of readability and ease of commenting.

  1. [..] have spent hundreds of hours crafting this document to alleviate pain points we have found with our current Code.

    Can we get some insight into that process? What did the team responsible for this CoC consider pain points?

  2. Why, under Self-harm and suicide, does this CoC only mention American helplines? Aren't there international organizations with similar functions?

  3. Under Sensitive content and imagery, why does imagery that induces or glorifies harm get all the attention? Has this been a particularly common and disruptive usage?

  4. We do not allow any content that promotes false, harmful, or misleading information [..]

    Unless, I take it, it was part of an answer written with the best intentions? If that assumption is correct, could you add the word "intentional" there somewhere? Will this be monitored, or is it exclusively up to other users to flag such content? Does this otherwise mean answers will get censored (instead of getting edited after misinformation has been pointed out)?

  5. The Misleading information policy is also referred to as Misinformation policy (similarly to what Thomas Owens points out in their answer about Political speech policy and Political content policy).

  6. As has been pointed out in other answers, if this is to be considered "a handshake agreement between users and the company", the responsibilities of the company are blatantly absent. If users on this platform agree to the CoC, can they e.g. expect fair and transparent reciprocal behaviour? Where is the "Our promises to you" to mirror the "Our expectations for users"? (I'd like to direct your attention to einpoklum's answer for a clearer case.)

3
  • 4
    RE point 4: We don't allow content that promotes false, harmful, or misleading information. We do allow people who post such things to continue to participate, if it wasn't on purpose and they'll try not to do it again. An "intentional" qualifier would be wrong. (Perhaps another part of the CoC needs changing, to clarify this?) As Shog9's answer points out, editing is one way that the promotion of false/harmful/misleading information is eliminated.
    – wizzwizz4
    May 8 at 19:18
  • 5
    "the responsibilities of the company are blatantly absent" the CoC applies to everyone (including company members) engaging on the SE network, no? I would assume so. Also, the company has additional responsibilities in the reporting section if you contact them to appeal a mod decision.
    – starball
    May 8 at 20:47
  • 1
    @wizzwizz4 the elephant in the room being the discussion around what is "harmful" in a meaningful sense. May 11 at 10:02
7
  • Under "bullying and harassment", what does it mean to "invalidate" "a person's individual experiences in a manner that causes harm"? What is "harm"?

  • Under "dangerous speech", what is "rhetoric that increases the risk of violence being condoned or committed against a particular group"? A discussion of crime can be construed to increase vigilantism (violence against "criminals"), even if the contributor explicitly calls for acting within the law. Never mind—legal police work may count as "violence against criminals".

  • Under "bigotry and discrimination", the list of characteristics is phrased in a way that implies exhaustiveness. Why doesn't it include sex (the most glaring omission by far), national origin, or economic background?

  • Under "extremism", what are "hateful organizations"? (It's ORed with other clauses, so there isn't any definition of "hateful".)

  • Under "hateful imagery", sex, national origin, and economic background are not listed.

  • Under "mocking content", what is "in a manner that could be reasonably interpreted as causing harm"? What harm?

  • Under "Political content", para. 2, sex, national origin, and economic background are once again not listed.

Under "self-harm", it is defined as "suicidal and self-injurious behaviors". It stands to reason that "harm" (sans "self-") is "murderous and injurious behaviors", which is rather hard to effect over TCP/IP.

The other, implied, definition of harm is "everything we prohibit, because things we prohibit are by definition harmful, else we wouldn't prohibit them". It's circular, exploitative, and unhelpful.

1
6

You objectify people -- saying that users have an ("actual") ethnicity etc:

To ensure that all users feel safe and welcome, we do not allow behaviors or content that cause or contribute to an atmosphere that excludes, marginalizes, or dehumanizes individuals or communities on the basis of their actual or perceived ethnicity, age, sexual orientation, gender identity and expression, disability, or held religious beliefs.

Instead, better to list these as types of abuse or of discrimination, rather than types or categories of people or of user.

The current CoC mentions types of offensive language, that's better than the new text:

No bigotry. We don’t tolerate any language likely to offend or alienate people based on race, gender, sexual orientation, or religion — and those are just a few examples.

The most useful line in the current CoC was this -- it let me moderate any or all personal comments:

No name-calling or personal attacks. Focus on the content, not the person.

8
  • 1
    As far as I understand, (most?) people do have an actual ethnicity. From Wikipedia: "Ethnicity may be construed as an inherited or as a societally imposed construct. Ethnic membership tends to be defined by a shared cultural heritage, ancestry, origin myth, history, homeland, language, dialect, religion, mythology, folklore, ritual, cuisine, dressing style, art, or physical appearance. Ethnic groups may share a narrow or broad spectrum of genetic ancestry, depending on group identification, with many groups having mixed genetic ancestry."
    – wizzwizz4
    May 13 at 14:10
  • 1
    @wizzwizz4 It's an "identity-view" i.e. a view about somebody's identity. It's a topic that's unsatisfactory and which may be (or "is always") associated with suffering. Even if you viewed me as having an ethnicity, I didn't come here to be told that, and IMO it would be wrong of me to assume that you have one.
    – ChrisW
    May 14 at 11:21
  • We could make the same point about gender, disability, and held religious beliefs. (I don't know how to apply the argument to sexual orientation, and it doesn't apply to age.)
    – wizzwizz4
    May 14 at 11:51
    @wizzwizz4 it doesn't apply to age I began cycling 12 years ago to commute -- an hour or two a day means 50,000 km or so since then, and I feel younger now than when I started (big surprise, turning back the clock like that). My birth date on my passport hasn't changed, but still -- given that "age" is one of the "grounds of discrimination", I think it's normal or correct (e.g. when hiring in Ontario) to view that as a taboo subject, or private. And similarly I think that the CoC should warn that it's a potentially unwelcome topic or language -- instead of saying that people are aged.
    – ChrisW
    May 14 at 12:14
  • Would removing "their" from "on the basis of their" resolve this issue?
    – wizzwizz4
    May 14 at 12:31
  • 1
    Would removing "their" from "on the basis of their" resolve this issue? Perhaps, in theory. The text has other problems which make it distasteful to me, graceless and bossy and repulsive. But this was one of the problems, i.e. that it seems to have been written by authors who bought into the premise that these are real. Do I want to have an actual or perceived ethnicity while I'm here? No thank you. Do I want SE to tell me that I must? Again, no. I already quoted what I thought was better text -- including "We don’t tolerate any language etc.", and "Focus on the content, not the person."
    – ChrisW
    May 14 at 12:46
  • 1
    Do I want to have an actual or perceived ethnicity while I'm here? I phrased this as an "I message" but that (i.e. what I want) is not my point. My point is that your text confirms the idea that people have ethnicity etc which is (potentially) something to fight about. IMO the difference matters -- it's a "view", a type of dispute, or of language, or discrimination, or personal insult. It's a commonly-held view, a theory, it's conventional -- but don't say it's a thing! Is "Focus on the content, not the person." no longer policy? This text literally refers to "personal" characteristics.
    – ChrisW
    May 16 at 2:31
5

For the "Our expectations for users" section:

Voting - Our voting system is central to how Stack Exchange works. Votes are how the Community signals great content and rewards its members for their contributions. Improperly cast votes undermine the integrity of the platform. Read more on how users are expected to use the voting system.

The linked Help Center page says very little about how users are expected to vote. It just says at the bottom:

Voting up a question or answer signals to the rest of the community that a post is interesting, well-researched, and useful, while voting down a post signals the opposite: that the post contains wrong information, is poorly researched, or fails to communicate information.

And it doesn't say anything about fraudulent voting (e.g. sock-puppet voting on one's own posts) or serial voting, which, if you're really going to talk about how we're expected to vote, should be part of the document or a linked resource. Please fix that. I suppose updating the Help Center page would work.

Actually, why not just add a link in that bullet point to the "Disruptive use of tooling policy" and "Inauthentic usage policy" sections, which do cover various bad voting things?

Also, as far as my understanding goes, people are free to vote in whatever way they want as long as it's not fraudulent or serial, and it's just recommended (in the vote tooltips and Help Center) that votes be used to indicate usefulness. So what's up with saying that we expect people to vote in particular ways? That seems to go against my general understanding (aside from the fraudulent and serial voting part).

7
  • 5
    The expectation of what votes are for has always existed de jure. The de facto acknowledgement that votes are anonymous and unenforceable beyond clear patterns of abuse doesn't change the expectation. Just like exceeding a speed limit by 1-2 units is not within the expectations set by law, but tolerated to the extent we don't use GPS trackers 24/7 to monitor every single vehicle.
    – Nij
    May 3 at 23:03
  • 3
    As a note, there is some official guidance around targeted/serial voting on this Help Center page: Why do I have a reputation change on my reputation page that says "voting corrected"? And there's a section on when not to upvote in the Help Center page for the "vote up" privilege as well. (We also have a mod-only Help Center page with more guidance on identifying and handling vote fraud, but obviously that's for mods' eyes only.)
    – V2Blast StaffMod
    May 4 at 19:52
  • 3
    I'd also argue, despite it sounding pedantic, that there is a key difference between "how users are expected to use the voting system" (as noted in the CoC) and "how users are expected to vote". Users are expected to use the voting system authentically, to rank content in the way they see fit. How users are expected to vote on each and every piece of content is intentionally left to the voter, and only subject to broad guidelines. I feel like this distinction is intentional.
    – zcoop98
    May 4 at 23:21
  • 1
    "Expectations" like this simply don't belong in the same document as the one that tells people what's unacceptable and discipline-worthy. May 4 at 23:58
  • 5
    Uhm.. yeah, they do. Expectations of conduct belong exactly in a code of conduct. That's literally the point, identifying what is and is not acceptable behaviour.
    – Nij
    May 5 at 8:23
  • 1
    While users are largely left to vote according to their own desires, there are some cases where we step in and may invalidate votes, like a user downvoting all content in a specific tag because they don’t like the tag (or the product the tag is about). That said, I think we can better clarify that point with the sentence we added at the end. Please check it out
    – Cesar M StaffMod
    May 12 at 17:49
  • 1
    @CesarM nice. Would you consider saying "Please also see the" instead of "Please see the"? Since it comes right after another instruction on something to read: "Read more on how users are expected to [...]".
    – starball
    May 12 at 20:34
5

Your sentence structure is completely overboard. Look at this:

To ensure that all users feel safe and welcome, we do not allow behaviors or content that cause or contribute to an atmosphere that excludes, marginalizes, or dehumanizes individuals or communities on the basis of their actual or perceived ethnicity, age, sexual orientation, gender identity and expression, disability, or held religious beliefs.

Parse that:

  • To ensure that all users feel
  • (safe and welcome),
  • we do not allow
  • (behaviors or content)
  • that
  • (cause or contribute)
  • to an atmosphere that
  • (excludes, marginalizes, or dehumanizes)
  • (individuals or communities)
  • on the basis of their
  • (actual or perceived)
  • (ethnicity, age, sexual orientation,
    • (gender identity and expression),
  • disability, or held religious beliefs).

I mean, dang!!

50 words in one sentence, from an archetypal "wall of text". It's horrible to read.

Compare with the current CoC, which is 20 words, more readable, more human.

No bigotry. We don’t tolerate any language likely to offend or alienate people based on race, gender, sexual orientation, or religion — and those are just a few examples.


I write too much myself, in the workplace -- and people complain, especially people for whom reading English is a chore, because it's a second language or because they have other things to do -- so I try to edit to make my text concise.

Editing to make your text concise, readable, friendly, isn't on your agenda, is it?


And for the record, a teacher maybe encourages good behaviour instead of criticizing bad. This CoC is all about negatives, lists and lists of bad stuff. Even the opening is harsh—the imperative mood:

No matter where you engage on the network with your peers, we expect all users to treat one another with kindness and respect.

How about request, encourage, ask, or even just "need"? This CoC is rude! And bossy.

7
  • 3
    I don't see "No matter where you engage on the network with your peers, we expect all users to treat one another with kindness and respect." as harsh. See my answer post's section on optics. For the people who see Stack Overflow/Exchange painted in a very negative light, that's the kind of statement that I expect to nudge the course of their thinking. "Request", "encourage", and "ask" are too weak of words. When I see the word "expect" with "kindness and respect", that makes me think "thank goodness".
    – starball
    May 13 at 9:21
  • 1
    @starball More truthfully I guess you "expect" a few users to behave badly -- which is why you're writing this whole wall of text. What you're saying is that you "require" users to behave, and implicitly threaten them with expulsion if they don't. And that may be true -- but it's rude, harsh -- you wouldn't talk like that to a newly-arrived guest even when those are your house-rules. There are politer ways to say it. This "mood" is imperative. Instead, say like "we need" to explain yourself; or "please", to request kindly; or exhort/inform users too with descriptions of recommended behaviours.
    – ChrisW
    May 13 at 9:33
  • 2
    Maybe it's just a personal/cultural/upbringing thing. To me, rules are rules, and shouldn't be sugar-coated. If we want to tell users what good behaviours are, we already have the tour and help center. And my impression from reading Code of Conduct statements for some big software repositories is that this format of laying out do-nots is fairly normal.
    – starball
    May 13 at 9:35
  • 1
    There's "speak softly and carry a big stick", a bit of American. And "Be nice", which was an old SO motto. There's more a detailed explanation in I Want You To Be Nice which I assume is good advice for moderators -- "Be nice. If someone gets in your face, I want you to be nice. Ask him to walk -- be nice. If he won't walk, walk him -- but be nice. If you can't walk him, one of the others will help you, and you'll both be nice."
    – ChrisW
    May 13 at 10:02
  • 1
    And my impression from reading Code of Conduct statements for some big software repositories is that this format of laying out do-nots is fairly normal. That is true. But this used to be "a community" for some reason. And IMO that reason was the way that SE people (founders and moderators) wrote -- there's nothing else -- tone and content; both, in conversation, and baked into the Help etc.
    – ChrisW
    May 13 at 10:07
  • Re "I write too much myself": It could be self reference, but dang! 40 words in that (essentially) one sentence (run-on'ish) May 13 at 14:13
  • @This_is_NOT_a_forum Try both sentences in online-utility.org/english/readability_test_and_improve.jsp -- one of them is off the chart.
    – ChrisW
    May 13 at 16:11
5

My feedback amounts to:

  • 1 concern;
  • 1 disappointment;
  • 1 note of appreciation;
  • 1 note of acquiescence to the circumstances that be;
  • and 1 warning.

My main concern with these changes would be over people trying to frame the code of conduct as their personal shield against curation. As the pages were extended to include phrases such as "repeated, or persistent unsolicited conduct, misuse of power or tools, or attacks that target specific users or groups of people in a manner that causes harm", it will only be a matter of time before they are inappropriately directed at curators. This answer already extends on this concern rather well. Still, I will stay optimistic and say that fake shields can only last for so long. As experienced back in 2018, maybe a surge of references to the code of conduct is bound to appear, but that will likely deteriorate over time, just like before.

I am also disappointed in not bringing back "assume good intentions" (or "presume good intent", which is also a great rephrase). I feel that this attitude alone could prevent many escalations happening in the network.

In any case, I appreciate that the code of conduct continues to be updated over time to face the fact that societal norms are far from static, and that the proposed changes are made transparent¹, even if just to appease some international entities and/or stakeholders of the company. Observing the new corpus from the perspective of a Stack Overflow user in particular, I do not really envision this as something that will significantly affect how this site is moderated. If I am wrong about this, then I am either oblivious to certain moderation edge cases, or the changes will turn out to be regrettable.


¹ It is also no surprise that the ghost of Monica Cellio appears whenever a change of code of conduct is involved. That incident was poorly resolved, and Stack Overflow will need to continue making efforts to regain trust in this process. It would help to make a very careful assessment when moderating answers to this question, as any moderation actions applied to answers (such as deleting them) are going to be perceived more strongly than in other contexts.

2

There are currently moderator actions against ChatGPT usage and other auto-generated content. If there are moderator actions, shouldn't the CoC back the moderators up a bit better? I have a feeling that this is in there someplace, but I'm not sure where.

Along the same lines, I don't believe copyright issues get the prominence they deserve in the "inauthentic usage" section. In fact, to me they feel sort of hidden. When I read the first few items in the section, it just doesn't feel like reading on would bring me to anything having to do with copyright, but there it is. (Anecdotally, my original intent was to say "copyright issues are missing", but I decided on a more thorough read and a Ctrl-F for "copyright" before clicking "Post".)

I would go so far as to recommend changing the name of the section to "Copyright violation and other inauthentic usage" and maybe place the copyright issues first. It should probably also discuss AI generated content in this context.

4
0

Under Site Policies, please reiterate that unacceptable behavior is unacceptable across the entire platform.

At the beginning of the Unacceptable Behavior section, it is said that you "have outlined key forms of unacceptable behavior across the entire platform," but it's much better to be clear, to emphasize, and to repeat that what's considered unacceptable behavior (abuse, misinformation, bigotry) is not subject to change based on each site's policy.

A proposed rewording:

This Code of Conduct is meant to work alongside individual site policies. Sites and Chatrooms may choose more restrictive policies for their content than what is allowed here, particularly around what is on-topic or off-topic. However, the unacceptable behavior outlined above is not subject to change and should be considered a constant across the platform.

2
  • 8
    Is the reiteration really necessary? What exactly do we gain by restating the obvious that "more restrictive" means that the baseline ones must be upheld? May 3 at 18:40
  • 1
    I feel like this would be better addressed by treating topicality, question/answer quality etc. as out of scope for the code of conduct. May 3 at 23:31
0

Consider adding language about what this does not cover. One example is the deluge of low-quality questions and actual spam from the same source(s). I don't see where this code is presented when a question is asked. Perhaps it should be, along with examples of the same question(s) that are asked every day/week and are routinely closed.

1
-3

The proposed CoC is so huge, unwieldy, and convoluted that it can hardly be plainly understood.

One of the basic principles of justice is that the rules must be made known. A necessary ingredient of that is having them in a form that is accessible to the people supposed to be subject to them without them having to employ a lawyer.

I don't know what new criterion the management thinks would not be fulfilled by the existing CoC, but would be satisfied by the proposal. And as that's apparently the criterion against which the proposal is being drafted, that should be made readily known as well.

'Codes' like this are too much like playing a nasty game of gladiators with the people supposed to be subject to them. The rules' authors are trying to entangle us in the net so they can then stick us with the trident.

It would be better by far to scrap it all and replace it with something plain and simple that makes good sense and justice.

0
-6

Do you really want feedback, or just affirmation and suggestions for tweaks to something that many of us will find objectionable to its core? The proposed CoC comes across as a vehicle for inserting the very kind of agitprop that those of us who have spent any time living behind the Iron Curtain thought and assumed we had left behind once and for all.

My feedback is simple: leave it alone and stop imposing additional restrictions.

Yes, Cyberspace is a world forum, and has been conceived of, from its very beginnings, as a space where we may freely express ourselves. And yes, this comes with positives and negatives; but those of us who have been here for decades have long since learned to live with this freedom - on both the sending and receiving end - and expect those of you newcomers (and "newcomers" includes the entire Stack Exchange network) to abide by the standards we, who came before you, established.

We're the Rome of the expression "when in Rome, do as the Romans do".

The best, and most consistent, way to deal with the underlying issues you're trying to address is to fall back to fundamentals - namely, the articles that comprise the Universal Declaration Of Human Rights (UDHR).

https://www.un.org/en/about-us/universal-declaration-of-human-rights

Adopt this as your Code of Conduct, if you must have one. Nothing more, nothing less.

This is a foundational charter of the United Nations, whether or not it has yet been accepted by all of its member states. Formal support for it is, for instance, embodied directly in the charter of the African Union.

All nation-states are expected to abide by its clauses. The same must apply to the world forum that makes up Cyberspace, as well as to all those on it. In that vein, it bears reminding you of what Article 30 states:

Article 30
Nothing in this Declaration may be interpreted as implying for any State, group or person any right to engage in any activity or to perform any act aimed at the destruction of any of the rights and freedoms set forth herein.

In particular, it is meant to apply not just to public organizations, but to all organizations, private or public; and especially to every organization that shares the world forum we call Cyberspace.

To the extent that you wish to have cordiality, this is already expressed as Article 1:

Article 1
All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

(And if you desire more "modern" inclusive language that actually better expresses the intent, you can replace the last word with "camaraderie".)

Nothing more needs to be said beyond that. Neither this forum, nor any other in Cyberspace, shall be subject to the laws of nation-states that violate or supersede the UDHR. Rather, it is the other way around: nation-states themselves must be subject to the UDHR, particularly if they wish to have a place in this forum. If their laws state the contrary, then it is their laws that need to change, or else the nation-state should be held in violation of the UDHR, and held liable for that violation.

One of the articles - perhaps the most important of them all - is:

Article 19
Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

another is:

Article 12
No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

and a third, which also bears pointing out, is:

Article 18
Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief, and freedom, either alone or in community with others and in public or private, to manifest his religion or belief in teaching, practice, worship and observance.

If this network cannot stand by these, as well as the other articles that make up the UDHR, then it needs to step aside and make room for one that can, or else be replaced - in its entirety - by one that shall, willing or not.

– NinjaDarth
  • "Freedom of opinion and expression" doesn't necessarily apply here -- xkcd.com/1357
    – ChrisW
    May 14 at 20:00
  • They're not depriving anyone of rights. Refusing to supply a platform for certain content isn't the same thing as preventing them from publishing the content elsewhere. By the logic in this post, I should be allowed to coerce CNN into giving me a primetime interview to express my political views. After all, I have the right to impart information and ideas through any media. If CNN won't grant me that right, they should be replaced by someone who will.
    2 days ago
  • @ChrisW We have human rights and the freedom of science, and I expect unconditional compliance. Assume StackExchange would consider the aspect of science applicable, as it binds research strategies, naturally driven by opinions, gaming and expression of such.
    16 hours ago
  • I moderate one of the SE sites about a religion. Its purpose is to let people ask and answer questions about its doctrine and practice. It's intended for beginners and experts alike. But if someone were to post there saying, "This religion is moronic, I hate it and you're all stupid for reading this", then I guess that post would simply be deleted. If bickering and silly arguments were allowed, the site would be less readable. Similarly, if someone were to start posting there about a different religion -- deleted as "off-topic".
    – ChrisW
    10 hours ago

Oftentimes, downvoting is done out of spite rather than based on how the question is asked. Even if someone is very thorough and follows the posting guidelines 100%, some people will still downvote them.

If someone downvotes a question, the Code of Conduct should require that person to provide feedback explaining why the post does not follow the guidelines. There should also be some way to have downvoted posts reviewed, to confirm either that a downvote is valid or that someone is abusing their privileges.

– Mr. Coz
  • And THIS is what I've been talking about. Thanks for proving my point. Again... We probably don't need more proof, though.
    – VLAZ
    22 hours ago
  • @E_net4 - At least you made a comment about not adding accountability to down voting. Can you please be a little more verbose and explain why? I'm sure I am not seeing the bigger picture, but it seems to me accountability adds credibility and fairness.
    – Mr. Coz
    3 hours ago
  • @Mr.Coz It also provides a convenient target which can be attacked. And yes, it does happen often enough to be a real concern, from revenge votes to outright attacks. This has even spilled over off-site, to the extent of threats being sent via private channels. That's not the sort of "accountability" we want here. The fact that you don't suggest "accountability" for upvotes speaks volumes about whether you actually want people to be accountable for voting, or just for the things you disagree with.
    – VLAZ
    2 hours ago
  • Are upvotes never abused? Are they never invalid? Your answer very definitely points to this being your position. Tightening the restrictions on downvotes and not even tackling anything remotely similar for upvotes is your attempt to crack down on just half the content feedback. If you really thought voting was a problem, you would have spared half a word on upvotes. You didn't. So, if you're now trying to come up with "Sure, we can extend the same for upvotes" (I see it way too often) - why wasn't that a concern before?
    – VLAZ
    2 hours ago
  • @Mr.Coz VLAZ already exposed part of the problem with your suggestion, and the multitude of links posted by rene continue to extend on why you are not going to see changes on the freedom and anonymity of voting any time soon. Maybe you received some downvotes and just took them personally like they're attacks. You need to reconsider those feelings and take the votes for their content rating nature, nothing more, nothing less.
    – E_net4
    2 hours ago
  • I'm not opposed to also adding accountability to upvotes, although I don't see them as being as damaging as downvotes. I don't understand how manipulating upvotes creates harm, but apparently it does. On the other hand, there are various levels of cognitive and learning ability here, just like everywhere else in the world. Excessive downvotes can prevent those with lesser abilities from using this site to learn and better themselves. Providing info about the vote (up or down) would be useful to help users grow.
    – Mr. Coz
    43 mins ago
  • The sites aren't user-centric. They are content-centric. We vote on content and how useful it is for others, not on users and how well they are doing.
    – VLAZ
    35 mins ago
  • 1) Upvotes can be harmful when they are granted to problematic answers, and on questions they might deceive potential answerers; 2) receiving downvotes does not prevent anyone from perusing the contents of the site, although it does prevent them from continuing with negative contributions.
    – E_net4
    5 mins ago
