I want to share some thoughts I had recently about YouTube spam comments. We all know those early bots in the YouTube comment section, with their “misleading” profile pictures and obviously bot-like comments. Their comments are usually either random remarks about any topic or copied from other users.

OK, why am I telling you this? Well, I think these bots are there to be recognized as bots. Their job is to be spotted, deleted, and ignored. That way everyone feels safe, thinking all the bots have been deleted, while in reality more sophisticated bots are still among us. So the obvious bots’ job is to get deleted and thereby mislead us into believing none are left.

What do you think? Sounds plausible, doesn’t it? Or am I just being paranoid? :D

  • Kwakigra@beehaw.org · 6 points · 3 hours ago

    They could be having that effect. Scams that look obvious are meant to attract people who fall for obvious scams, such as people with dementia. They are designed to be transparent to most people, because the scammers don’t want clicks from anyone with the faculties to see through the rest of the scam.

  • Lvxferre [he/him]@mander.xyz · 5 points · edited · 4 hours ago

    So, just for show? It sounds possible but implausible IMO; I don’t think YouTube cares about that cesspool of its own comments, not even enough to set a smoke screen up.

  • megopie@beehaw.org · 3 points · edited · 4 hours ago

    Maybe some of the obviousness is a sort of camouflage, in that if it looks like a phishing scheme, people at YouTube won’t look any deeper. I think the actual goal of the bots is to manipulate the algorithm. Like, most of the time the obvious bots just get ignored, especially on videos from bigger creators, so there’s no reason to put effort into making them believable.

    Like, maybe they comment on video A to show “engagement” with that content, then they go and comment on video B. Fool the algorithm into treating people who engage with video A as the same kind of audience who would engage with video B, thus getting the algorithm to recommend video B more often to viewers of video A. For something like that you wouldn’t need the bots to look real to other commenters, and having them seem like innocuous phishing scam bots might reduce the scrutiny on their activity.
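
    Just to illustrate the idea (a toy sketch, not how YouTube’s recommender actually works): a naive recommender that scores pairs of videos by how many accounts engaged with both would get nudged by bot accounts commenting on both A and B. All names and data below are made up.

    ```python
    # Toy co-engagement counter: every account that interacts with two videos
    # strengthens the link between them. Bot accounts inflate that link just
    # as well as real ones. Purely hypothetical example data.
    from collections import defaultdict
    from itertools import combinations

    # engagement log: (account, video) pairs
    engagements = [
        ("alice", "video_A"), ("bob", "video_A"),
        ("bot_1", "video_A"), ("bot_1", "video_B"),
        ("bot_2", "video_A"), ("bot_2", "video_B"),
    ]

    # group the videos each account engaged with
    videos_by_account = defaultdict(set)
    for account, video in engagements:
        videos_by_account[account].add(video)

    # count how many accounts engaged with each pair of videos
    co_engagement = defaultdict(int)
    for videos in videos_by_account.values():
        for a, b in combinations(sorted(videos), 2):
            co_engagement[(a, b)] += 1

    # here the only accounts linking A and B are the bots
    print(dict(co_engagement))  # {('video_A', 'video_B'): 2}
    ```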

    I could see a lot of different reasons to do that. Could be as simple as some shady “Viral marketing consultancies” trying to boost a client’s channel in the algorithm. Could also be something more comprehensive and nefarious, like trying to manipulate social discourse by steering whole demographics towards certain topics or even away from specific topics. I do wonder how much the algorithm could be nudged by an organized bot comment spam ring.

    I don’t think you sound paranoid at all, at least not compared to me. Bots are everywhere on social sites, and there is a well documented history of different groups using various tactics and strategies to hide the bots or distract from what the bots are doing.

    • SteevyT@beehaw.org · 1 point · 1 hour ago

      I wonder what list I got put on after playing that ear-piercing high Ab with the trumpet 3 inches from the phone, back when I was having an exceptionally bad spam day.

  • lattrommi@lemmy.ml · 2 points · 4 hours ago

    this possibly applies to phone calls, text messages, email, comments on forums and sites like youtube, and many other things.

    check: does user respond? if yes, user will engage. add to will engage list.

    check: how does user respond? delete or reply? if reply, add to repeat text/voice call list. if delete add to spam defender list.

    will engage list: continue to send. engagement is attention. they are acknowledging the message, so advertisers may be able to attract their attention in some way.

    text/voice list: same as engage list but also opens lines of communication. chance to upsell. chance to phish with support scam.

    spam defender list: continue using default spam tactics. add higher level phishing techniques. consider adding to spearphishing list.

    spearphishing list: has spam experience and can use computer/phone. possible tech worker. gather more information. attempt to infiltrate. cross reference username with leak db’s. do they reuse their passwords?

    all of the above: collect ai training data.

    i don’t know how true any of this is, it’s simply how i imagine some of it works. i might be paranoid. how you react is part of how you get classified into a list or group.
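
    a toy sketch of the sorting i’m imagining (completely made up, not any real system):

    ```python
    # made-up classifier for the lists above -- how i imagine it, not a real system
    def classify(responded, action=None):
        """responded: did the user react at all; action: "reply" or "delete"."""
        lists = set()
        if not responded:
            return lists                          # no reaction, nothing learned yet
        lists.add("will_engage")                  # any response counts as engagement
        if action == "reply":
            lists.add("text_voice_call")          # open line of communication, upsell / support scam
        elif action == "delete":
            lists.add("spam_defender")            # recognizes spam, try higher level phishing
            lists.add("spearphishing_candidate")  # maybe tech-savvy, gather more info
        lists.add("ai_training_data")             # everything gets collected regardless
        return lists

    print(classify(responded=True, action="delete"))
    ```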

    • thingsiplay@beehaw.org (OP) · 1 point · 5 hours ago

      To me the comments are one of the most interesting things on YouTube, whether on gaming, Linux, or, for example, funny video content with lots of funny comments. I actually use the FreeTube client to watch videos anonymously, but I go to Firefox and log in to YouTube specifically to comment and interact with other users.

      • Rob299 - she/her@lemmy.blahaj.zone · 2 points · edited · 4 hours ago

        If you are concerned about the possibility of bots among the other users that have not been deleted, what I would suggest is this: get into less popular, less documented video topics and their comment sections.

        Idk what you are searching for, but the more specific a video is to a genre or topic, the more it will throw off an AI chat bot. They’ll eventually say the wrong thing. What you do then is see whether they bother correcting themselves or keep using the same answer in their responses. You’ll know it’s a bot because, in a more niche topic, a real person would be more dedicated to saying the right things.

        Linux is well documented, but how well documented is the Palemoon browser compared to Firefox, for example? The more specific you get, the easier it will be to know if it’s an AI bot. If you talk about everyday topics, it gets harder to tell, because the AI is constantly being trained on user-generated content.