Mastodon, an alternative social network to Twitter, has a serious problem with child sexual abuse material, according to researchers from Stanford University. In just two days, the researchers found over 100 instances of known CSAM across over 325,000 posts on Mastodon. They also found hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and grooming of minors. One Mastodon server was even taken down for a period of time due to CSAM being posted. The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.
Block hash lists then? Something like a community-driven hashlist for CSAM would work: if the majority of federated instances report a piece of content as that type, it gets added to the list. Instances could then choose what lists they want to block.
…instances could also show what lists they subscribe to, so users could see what sort of moderation they choose.
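A rough sketch of what that could look like on an instance's side, in Python; the list name and the stand-in hash entry here are made up purely for illustration:

```python
import hashlib

# Hypothetical sketch: an instance subscribes to community-maintained
# blocklists, each of which is just a set of known-bad content hashes.
SUBSCRIBED_LISTS = {
    "example-community-csam-list": {
        # SHA-256 of the bytes b"test", used here as a stand-in entry
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    },
}

def is_blocked(upload: bytes) -> bool:
    """Return True if the upload's hash appears on any subscribed list."""
    digest = hashlib.sha256(upload).hexdigest()
    return any(digest in entries for entries in SUBSCRIBED_LISTS.values())

print(is_blocked(b"test"))            # True  (matches the stand-in entry)
print(is_blocked(b"something else"))  # False
```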
This is kind of problematic… By creating a community driven hashlist that is freely shared, you’ve also kind of created an index of CSAM content that could easily be extrapolated for people actively looking to find/share that content.
Doesn’t anyone looking for that material already know what to look for?
Surely a list of hashes wouldn’t be that useful?
Super useful, it’s very similar to how magnet links for torrenting work.
A lot of other areas online make use of hashes as identifiers already too. If you search for a hash of a file you’ve downloaded, just the hash alone, there’s a very good chance you’ll get multiple results.
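To make that concrete, this is roughly all it takes to get that kind of identifier-style hash for a file yourself (a quick sketch, not tied to any particular site or tracker):

```python
import hashlib
import sys

def file_sha256(path: str) -> str:
    """Hash a file in chunks so even large downloads are handled fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Paste the printed hash into a search engine and there's a decent
    # chance you'll find pages referring to that exact file.
    print(file_sha256(sys.argv[1]))
```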
only if they are cryptographic hashes (the kind of hash functions that back btc, ltc, and other cryptos), as those are irreversible*
*i won’t explain, use that internet in your pocket
So the standard approach to this is so-called “perceptual hashing.” Effectively, using cryptographic hashes (sha256, etc.) doesn’t really work well in this case. Given a piece of illegal content, that content is likely to still be just as illegal with a single pixel changed – however, it’ll have a completely different cryptographic hash. So instead, you use a hash function that measures how “similar-looking” two images are, ignoring things like dimensions, color palette, JPEG compression artifacts, etc. This is obviously way fuzzier, and is prone to both false positives and false negatives.
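A minimal sketch of the idea, using the third-party Python imagehash library (plus Pillow); the test image is generated on the fly just for illustration:

```python
import hashlib

import imagehash          # third-party: pip install ImageHash
from PIL import Image     # third-party: pip install Pillow

# Build a simple gradient test image, then a copy with one pixel changed.
original = Image.new("RGB", (64, 64))
original.putdata([(x * 4 % 256, y * 4 % 256, (x ^ y) % 256)
                  for y in range(64) for x in range(64)])
tweaked = original.copy()
tweaked.putpixel((0, 0), (255, 255, 255))

# Cryptographic hashes: completely different after a one-pixel change.
print(hashlib.sha256(original.tobytes()).hexdigest())
print(hashlib.sha256(tweaked.tobytes()).hexdigest())

# Perceptual hashes: Hamming distance should be 0 or very close to it.
print(imagehash.phash(original) - imagehash.phash(tweaked))
```

The one-pixel change flips the sha256 completely while the perceptual hash barely moves, which is exactly the “similar-looking” property described above.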
Because all this is inherently kinda fuzzy, the exact database of hashes is usually “secret sauce” if you will. If it were public, it would be super easy to circumvent. As an example, given an illegal image, someone could just keep making tiny tweaks until it no longer matches anything in the database, and then share that version without it ever getting flagged.
As a result, even “public” databases are distributed with NDAs etc. This obviously does not jibe well with an open-source, federated network like Mastodon, and I have my doubts as to how willing the relevant agencies would be to give their databases to every rando with $5 to spin up a Pleroma instance on a VPS. A public DB might help in some cases, but unfortunately more illegal content is produced every day, and so it would be extremely hard to keep up with the bad actors.