
Policy for managing and removing illegal content (e.g. CSAM) from the Betanet network #40

@PyQio

Description


Spec section: none — the document does not address content removal/moderation.

Problem description:
Currently, the Betanet 1.1 specification does not indicate how the network should behave when content is published that is illegal under the law of one or more jurisdictions, such as child sexual abuse material (CSAM), which is widely recognised as unlawful.

The management of seriously illegal content is essential to protect users, avoid legal liability for operators, and make the system usable in regulated contexts.

Addressing this issue at the network design stage is critical so that the architecture, procedures, and governance take these challenges into account from the start, avoiding the need to introduce improvised or incompatible solutions later.

Points to clarify:

  • Does Betanet intend to adopt a model similar to Tor, in which each publisher is responsible for their own content and the network is neutral?
  • Or does it intend to provide a governance mechanism to remove/block services or aliases?
  • If so, how would rapid intervention be guaranteed in serious cases, and how would abuse of power or political censorship be avoided?
  • Is there a mechanism to prevent the propagation of or access to content that is prohibited in certain jurisdictions, while maintaining the decentralised nature of the system?

Example scenario:

  1. An alias betanet://example points to a service that distributes CSAM (on or off the network).
  2. A legal authority in a state or country requests immediate removal.
  3. Currently, the specification does not describe any procedure for:
    a. Delisting the alias from the ledger. (A hypothetical record shape for this step is sketched after this list.)
    b. Reporting and blocking the distribution of content in the mesh.
    c. Informing participating nodes.
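
To make step 3(a) concrete, here is a hypothetical sketch (in Go) of what a signed delisting record appended to the alias ledger might look like. None of these names, fields, or authorities come from the spec; they are assumptions for illustration only.

```go
// Hypothetical shape of a signed delisting record for the alias ledger.
// All names here (DelistRecord, "emergency-quorum-1") are illustrative.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/json"
	"fmt"
	"time"
)

// DelistRecord marks an alias as delisted; nodes would verify the
// signature against a known authority key before honouring it.
type DelistRecord struct {
	Alias     string    `json:"alias"`     // e.g. "betanet://example"
	Reason    string    `json:"reason"`    // machine-readable category, e.g. "csam"
	IssuedAt  time.Time `json:"issued_at"`
	Authority string    `json:"authority"` // identifier of the signing authority
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)

	rec := DelistRecord{
		Alias:     "betanet://example",
		Reason:    "csam",
		IssuedAt:  time.Now().UTC(),
		Authority: "emergency-quorum-1",
	}

	// The canonical encoding of the record is what gets signed and
	// appended to the ledger.
	payload, _ := json.Marshal(rec)
	sig := ed25519.Sign(priv, payload)

	fmt.Println("record:", string(payload))
	fmt.Println("signature valid:", ed25519.Verify(pub, payload, sig))
}
```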

Proposal:

  • Explicitly define a content policy.
  • If a ‘neutral network’ approach is chosen, declare it and document the legal responsibility of operators.
  • If a moderation approach is chosen, introduce a secure technical mechanism to:
    a. Report illegal content.
    b. Block the alias or CID involved.
    c. Ensure auditability and protection against abuse. (A sketch of an append-only audit log follows below.)
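
One possible shape for point (c): chain every removal decision into an append-only, hash-linked audit log, so that entries cannot be silently rewritten or dropped. This is a minimal illustrative sketch under assumed names, not anything the spec defines; a production design would add signatures and public replication.

```go
// Minimal hash-chained audit log: each entry commits to the previous
// one, so tampering with history breaks the chain. Illustrative only.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

type AuditEntry struct {
	PrevHash string // hash of the previous entry; "" for the first
	Action   string // e.g. "delist-alias", "block-cid"
	Target   string // the alias or CID acted upon
}

func (e AuditEntry) Hash() string {
	h := sha256.Sum256([]byte(e.PrevHash + "|" + e.Action + "|" + e.Target))
	return hex.EncodeToString(h[:])
}

func main() {
	prev := ""
	for _, a := range [][2]string{
		{"delist-alias", "betanet://example"},
		{"block-cid", "bafy...exampleCID"}, // placeholder CID
	} {
		e := AuditEntry{PrevHash: prev, Action: a[0], Target: a[1]}
		fmt.Printf("%s %s (entry hash %s...)\n", e.Action, e.Target, e.Hash()[:16])
		prev = e.Hash()
	}
}
```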

Note: in Tor and IPFS, this is an ongoing debate; in some countries, attempts have been made to prosecute relay operators.


Open Questions

  1. Rapid response in serious emergencies

    • Is there (or is there a plan to introduce) a mechanism for taking action within hours/minutes in the event of seriously illegal content that might harm others, avoiding the need to wait for the quorum and activation times (>30 days) provided for in §10?
    • In the absence of such a mechanism, how is the risk mitigated that the network will be used for illegal content without the technical possibility of immediate intervention?
  2. Prevention of abuse and arbitrary censorship

    • If a rapid removal system is introduced, what technical and procedural safeguards will prevent it from being used to censor legal but politically inconvenient content?
    • Is there an audit and transparency system for all removal decisions?
  3. Immutability of content and its management

    • Considering that content distributed via L3 (libp2p/Bitswap) is immutable and identified by hash (CID), what strategy will be adopted:
      a. Removal only at the alias level (delisting), leaving the content available via direct CID?
      b. Introduction of a CID blacklist shared between nodes? (A minimal sketch of this option follows the list.)
      c. Voluntary node cache purge mechanisms?
    • In all cases, how do you intend to manage the propagation and rapid updating of such blocking information across the network?
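
For illustration, option (b) could reduce to a check like the one below: the node consults a shared denylist before serving a requested CID. Here the denylist is just an in-memory set with placeholder CIDs; in practice it would need to be distributed, authenticated (e.g. as a signed ledger object), and rapidly propagated, which is exactly the open question above.

```go
// Sketch of a CID denylist consulted in the block-serving path.
// The CIDs and the Denylist type are placeholders, not spec names.
package main

import "fmt"

type Denylist map[string]bool

// ShouldServe is the hook a node would call before returning data
// for a requested CID in a Bitswap-style exchange.
func (d Denylist) ShouldServe(cid string) bool {
	return !d[cid]
}

func main() {
	deny := Denylist{"bafy...blockedCID": true} // placeholder CID

	for _, cid := range []string{"bafy...blockedCID", "bafy...okCID"} {
		if deny.ShouldServe(cid) {
			fmt.Println("serving", cid)
		} else {
			fmt.Println("refusing", cid, "(denylisted)")
		}
	}
}
```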

Activity


jacksteussie commented on Aug 13, 2025


I think the point of a second internet would include a neutral, uncensored policy like Tor's, but the fact that an idea like this wasn't even raised by the repo owner leads me to believe that he has no idea what he's talking about. My opinion is that rather than gaslighting people through his YouTube videos about how smart he is and how dumb we are, he should admit that he's not sure and be open to more collaboration so that ideas like these can be brought up. Unfortunately, he seems completely unresponsive to anything in this repo, so this idea may be falling on deaf ears. I applaud you for the thought, though.


DistressedBrain commented on Aug 14, 2025


All I can suggest to stop those things is to do a content review depending on what it is.

For text and comments, a filter.
For pictures and the like, an AI that scans and decides whether something violates any policy, with its decisions reviewable by a moderator when harmless content is flagged. Similar to copyrighted content on YouTube, if you understand what I mean.

But nonetheless, this is difficult without some degree of surveillance of users.
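
A rough sketch of that flag-then-review flow: exact-hash matching against a list of already-confirmed material blocks immediately, while everything else would be scored by an automated scanner and, if uncertain, escalated to a human moderator. The hash list and thresholds here are placeholders; a real deployment would use perceptual hashing or an ML classifier rather than plain SHA-256.

```go
// Illustrative moderation pipeline: hash match -> block; otherwise a
// scanner would score the content and uncertain cases go to review.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Hashes of content already confirmed as illegal (placeholder entry:
// this is just sha256("foo"), used so the example is runnable).
var knownBad = map[string]bool{
	"2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae": true,
}

type Verdict string

const (
	Allow  Verdict = "allow"
	Review Verdict = "needs human review" // escalated to a moderator
	Block  Verdict = "block"
)

func classify(content []byte) Verdict {
	h := sha256.Sum256(content)
	if knownBad[hex.EncodeToString(h[:])] {
		return Block // exact match with confirmed-illegal material
	}
	// An automated scanner (perceptual hash or ML score) would sit
	// here; borderline scores should return Review, not Block, so a
	// moderator makes the final call on flagged-but-harmless content.
	return Allow
}

func main() {
	fmt.Println(classify([]byte("foo")))           // -> block
	fmt.Println(classify([]byte("harmless text"))) // -> allow
}
```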

