  • #3

Closed
Open
Opened 19 hours ago by trymeout@trymeout
  • Report abuse
  • New issue
Report abuse New issue

(feature) NSFW & Opinion Tags

Hello,

I would like to suggest two tags that you can put on your posts, so that family-friendly users do not have to see certain content while those who do want to see it are not restricted.

- NSFW Tag: NSFW is for porn and nudity. And make this very, very clear. Does cleavage count as NSFW? Does a girl in a bikini count as NSFW? Make this crystal clear. Heck, rename the NSFW tag to "Adult" so it is clear it is a sexual content filter.

- Opinion Tag: This is a tag for posts that express a political opinion. So I can post a picture of a cabin and I can post a clown world meme. The clown world meme is a political meme; the cabin is not. Those who want to view political content can enable "Opinion" posts. Those who want a family-friendly feed with no politics can disable viewing Opinion tags. That way your Grandma can enjoy Minds and see cats and dogs, while those who want political engagement can get their taste of content.
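The two independent filters described above could be sketched as follows. This is only an illustration of the idea; the tag names and viewer-preference flags are assumptions, not an actual Minds API.

```python
# Hypothetical sketch of the two independent feed filters proposed above.
# Tag names and the viewer-preference flags are assumptions for illustration.

def visible(post_tags, show_nsfw=False, show_opinion=True):
    """Hide a post only when it carries a tag the viewer has disabled."""
    if "nsfw" in post_tags and not show_nsfw:
        return False
    if "opinion" in post_tags and not show_opinion:
        return False
    return True
```

Because the two flags are independent, a viewer can opt into political posts while still hiding adult content, which is the whole point of having two tags instead of one.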

The reason for two filters is that I want political content but not nudity, and censoring political content as NSFW makes political content creators seem like they are in the porn industry, and they are not.

The one issue with this idea of mine is not the decentralized jury, but the 3-strikes rule that puts a channel permanently in the NSFW or Opinion state. People make mistakes: I can forget to mark a political post as "Opinion", and if I make that mistake three times my channel will be permanently marked as an Opinion channel.

My suggestion to fix this is not to have your channel permanently marked as NSFW or Opinion after three strikes, but to have it marked NSFW or Opinion for maybe 1, 6, or 12 months. Once X months have passed, the channel will no longer be set as an NSFW or Opinion channel and the 3 strikes will be removed.
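The expiring-strike scheme suggested above could be sketched like this. The 6-month window and the strike limit are assumptions pulled from the suggestion, not an existing implementation.

```python
from datetime import datetime, timedelta

STRIKE_LIMIT = 3                    # strikes before a channel is force-tagged
STRIKE_TTL = timedelta(days=180)    # hypothetical 6-month expiry window

def active_strikes(strike_dates, now):
    """Return only the strikes that have not yet expired."""
    return [d for d in strike_dates if now - d < STRIKE_TTL]

def channel_forced_tag(strike_dates, now):
    """A channel stays force-tagged only while 3+ unexpired strikes remain."""
    return len(active_strikes(strike_dates, now)) >= STRIKE_LIMIT
```

With this design the forced tag lifts automatically: once any strike ages past the window, the active count drops below the limit and the channel is no longer force-tagged.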

Minds users can discuss this here... https://www.minds.com/newsfeed/983458736943038464


Edited 19 hours ago by trymeout


    • trymeout @trymeout changed the description 19 hours ago

    • Errol Loewenthal @valleylion · 6 hours ago

      The jury system could be implemented as follows:

      A user uploads content. This content is marked by the user as NSFW, "for kids", or defaulted to PG13-plus.

      Content that is not marked NSFW will be referred to either staff or a community member to review.

      If the content is marked "for kids" but is thought to be PG13, then the content is highlighted for jury assessment.

      If content is defaulted to PG13-plus but is thought to be NSFW, then the content is highlighted for jury assessment.

      This assessment is done without the content provider's knowledge.

      Should the jury assess that the content was incorrectly highlighted, the process stops.

      Should the jury agree that the content was incorrectly marked, then an email is sent to the content provider notifying them of the assessment.

      The content provider could then accept the assessment or request a review.

      On a review request, the content provider must submit mitigating reasons for the assessment to be overturned.

      The mitigating reasons and the content are sent to a second jury to assess.

      Should they accept the mitigating reasons the process stops.

      Should they still agree with the initial highlighting and prior assessment, the content is marked as such and the content provider is notified.

      To ensure consistency, each person who volunteers to highlight content must undertake an assessment, and they can earn tokens for this task.

      Tokens could be awarded based on the accuracy of highlighting content.

      If too many of a person's highlights are overturned by the first jury, it means the person can't do the job, and they can be dropped.

      The more second-jury assessments that agree with the highlighting, the better the quality of the person and the more tokens they can earn.

      This system could be self-regulating, and minds.com would only be the platform and not the enforcer.

      Edited by Errol Loewenthal 6 hours ago
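The two-stage jury flow in the comment above could be sketched as a small decision function. The state names and boolean outcomes are assumptions for illustration, not Minds' actual moderation API.

```python
# Hypothetical sketch of the two-stage jury flow described in the comment
# above; state names and boolean outcomes are assumptions, not Minds' API.

def review(highlighted, jury1_agrees=False, provider_appeals=False,
           jury2_accepts_mitigation=False):
    """Walk the flow and return the final state of the content."""
    if not highlighted:
        return "published"      # nobody flagged the content
    if not jury1_agrees:
        return "published"      # first jury: the highlight was wrong
    if not provider_appeals:
        return "remarked"       # provider accepts the assessment
    if jury2_accepts_mitigation:
        return "published"      # second jury accepts the mitigating reasons
    return "remarked"           # second jury upholds the original marking
```

Note that every path either stops with the content as-is ("published") or re-marks it, matching the comment's rule that the process stops whenever a jury sides against the highlight.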
    Reference: minds/terms-of-service#3