Not a good look for Mastodon - what can be done to automate the removal of CSAM?

  • docrobot@lemmy.sdf.org · 1 year ago

    I’m not actually going to read all that, but I’m going to take a few guesses that I’m quite sure are going to be correct.

    First, I don’t think Mastodon has a “massive child abuse material” problem at all. I think it has, at best, a “racy Japanese-style cartoon drawing” problem or, at worst, an “AI-generated smut meant to look underage” problem. I’m also quite sure there are monsters operating in the shadows, dogwhistling and hashtagging to each other to find like-minded people to set up private exchanges (or instances) for actual CSAM. This is no different from any other platform on the Internet, Mastodon or not. This is no different from the golden age of IRC. This is no different from Tor. This is no different from the USENET and BBS days. People use computers for nefarious shit.

    All that having been said, I’m equally sure that this “research” claims that some algorithm has found “actual child porn” on Mastodon that has been verified by some “trusted third part(y|ies)” that may or may not be named. I’m also sure this “research” spends an inordinate amount of time pointing out the “shortcomings” of Mastodon (i.e. no built-in “features” that would allow corporations/governments to conduct what is essentially dragnet surveillance on traffic) and how this has to change “for the safety of the children.”

    How right was I?

    • JBloodthorn@kbin.social · 1 year ago

      “The content in question is unfortunately something that has become very common in recent months: CSAM (child sexual abuse material), generally AI-generated.”

      AI is now apparently generating entire children, abusing them, and uploading video of it.

      Or, they are counting “CSAM-like” images as CSAM.

      • docrobot@lemmy.sdf.org · 1 year ago

        Of course they’re counting “CSAM-like” images in the stats; otherwise they wouldn’t have any stats at all. In any case, they don’t really care about child abuse at all. What they care about is the existence of a platform they haven’t been able to wrap their slimy tentacles around yet.

    • SheeEttin@lemmy.world · 1 year ago

      Halfway there. The PDF lists drawn 2D/3D, AI/ML-generated 2D, and real-life CSAM. It does highlight the actual problem of young platforms with immature moderation tools not being able to deal with the sudden influx of objectionable content.
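
      One concrete way to automate what the thread title asks about is hash matching: compare each upload’s perceptual hash against a list of hashes of known material and flag near-duplicates for removal or review. The sketch below is a minimal illustration only, not anything Mastodon ships; the hash values, distance threshold, and upload directory are hypothetical placeholders, and real deployments match against vetted databases (e.g. PhotoDNA/NCMEC hash lists) that are not publicly available.

      ```python
      # Minimal sketch of perceptual-hash flagging (illustrative only).
      # KNOWN_BAD_HASHES, MAX_DISTANCE, and the "uploads" directory are
      # hypothetical placeholders, not part of any real hash-sharing program.
      from pathlib import Path

      from PIL import Image  # pip install pillow imagehash
      import imagehash

      # Placeholder perceptual hashes standing in for a vetted blocklist.
      KNOWN_BAD_HASHES = {imagehash.hex_to_hash("0f1e2d3c4b5a6978")}

      MAX_DISTANCE = 5  # assumed Hamming-distance tolerance for near-duplicates


      def is_flagged(image_path: Path) -> bool:
          """Return True if the image is a near-duplicate of a known-bad hash."""
          candidate = imagehash.phash(Image.open(image_path))
          return any(candidate - known <= MAX_DISTANCE for known in KNOWN_BAD_HASHES)


      if __name__ == "__main__":
          for path in Path("uploads").glob("*.jpg"):  # hypothetical upload queue
              if is_flagged(path):
                  print(f"flag for review: {path}")
      ```

      Hash matching only catches known, previously verified material, which is one reason it does little for the novel AI-generated or drawn content the PDF also lists.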