edited from talent to job

  • LouNeko@lemmy.world · 2 days ago

    Anti-cheat. Train an AI on the gameplay data (position, actions, round duration, K/D, etc.) of caught cheaters and use that to flag new ones. No more kernel-level garbage, just raw gameplay data.
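
    The idea above can be sketched as a simple statistical baseline rather than a trained model — a toy illustration, with made-up feature names and numbers (K/D ratio and headshot percentage are assumptions, not a real anti-cheat feature set):

    ```python
    from statistics import mean, stdev

    # Hypothetical per-player gameplay features: (kd_ratio, headshot_pct)
    legit = [(1.1, 0.22), (0.8, 0.18), (1.4, 0.25), (0.6, 0.15), (1.0, 0.20)]

    def build_baseline(samples):
        """Fit a per-feature (mean, stdev) baseline from known-legit gameplay data."""
        cols = list(zip(*samples))
        return [(mean(c), stdev(c)) for c in cols]

    def flag(player, baseline, threshold=3.0):
        """Flag a player if any feature sits more than `threshold`
        standard deviations above the legit-population mean."""
        return any((x - m) / s > threshold for x, (m, s) in zip(player, baseline))

    baseline = build_baseline(legit)
    print(flag((4.5, 0.95), baseline))  # absurd K/D and headshot rate -> True
    print(flag((1.2, 0.21), baseline))  # ordinary stats -> False
    ```

    A real system would presumably train a classifier on labeled cheater data rather than thresholding z-scores, but the pipeline shape — raw gameplay features in, a suspicion flag out, no kernel driver anywhere — is the same.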

    • jimmycrackcrack@lemmy.world · 1 day ago

      It’s also a good fit since it’s low stakes. I mean, I’d be furious if I were misidentified after paying for the game, but at the end of the day it’s only a game.

  • Tattorack@lemmy.world · 2 days ago

    Any body-breaking heavy labour. Emphasis on body-breaking; there’s nothing wrong with hard work, but there are certain people who believe hard work = leaving your body destroyed at 50.

    • jimmycrackcrack@lemmy.world · 1 day ago

      Yeah, I think people like this idea because of a kind of ironic poetic justice: it’s those guys who wanted to replace everyone except themselves with AI. But if you think about how much you hated those uncaring bastards operating like robots just to extract an ounce of profit at whatever the human cost, imagine them now actually being a robot. Also, if you ever had to deal with bullshit from those guys and resented having to grin and bear it even though you don’t think they’re particularly qualified and they know nothing about your job, imagine being “managed” by a fucking robot that says patronising, encouraging things because it’s learned the exact pattern of speech that gets the behaviour it wants out of you. Admittedly, at least some of the decision-making might be a bit more rational, but every now and then AI gets things totally out of whack in the strangest ways, and you’d just have to accept those decisions, from a damn machine.

    • Numuruzero@lemmy.dbzer0.com · 2 days ago

      I get what you’re going for but I have a hard time imagining this as a good thing so long as companies are profit driven.

  • AA5B@lemmy.world · 3 days ago

    None. The current ones with internet content, reporting, and call centers are already making things worse. Just no.

    It can definitely be a useful tool though, as long as you understand its limitations. My kids’ school had them feed an outline to ChatGPT and then correct the result. Excellent.

    • Consultants generate lots of reports that AI can help with
    • I find AI useful for summarizing lower-priority chat threads
    • A buddy of mine uses it as a first draft to summarize his team’s statuses
    • I’m torn on code solutions. Sometimes it’s really nice, but you can’t forward a link. More importantly, the people who need it most are the least likely to notice where it hallucinates. Boilerplate works a little better
  • Hackworth@lemmy.world · 3 days ago

    Illustrators. Actors. Animators. Writers. Editors. Directors. Let’s make art impossible to sell so we can get back to proper starving, errr… I mean… making art as a form of expression rather than commerce.

  • s08nlql9@lemm.ee · 3 days ago

    I think I’ve read some posts, on Hacker News for example, about people already using AI as a therapist. I’ve had good conversations with ChatGPT when I asked it for personal advice. I haven’t tried talking to a real therapist yet, but I can see AI being used for this purpose. The services may still be provided by big companies, or we could host it ourselves, but it could (hopefully) be cheaper than paying a real person.

    Don’t get me wrong, I’m not against real professionals in this field, but some people just can’t afford mental healthcare when they need it.

  • spicy pancake@lemmy.zip · 3 days ago

    Perhaps it’s not possible to fully replace all humans in the process, but harmful content filtering seems like something where taking the burden off humans could do more good than harm if implemented correctly (big caveat, I know.)

    Here’s an article detailing a few people’s experiences with the job and just how traumatic it was for them to be exposed to the graphic and disturbing content on Facebook that required moderator intervention.
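
    One common shape for “taking the burden off humans” is confidence-based triage: an upstream classifier auto-handles the near-certain cases and only the ambiguous middle band ever reaches a person. A minimal sketch — the thresholds and score values are illustrative assumptions, not any platform’s real policy:

    ```python
    # Hypothetical triage: route content by an upstream classifier's harm
    # score so human moderators only see the ambiguous middle band.
    AUTO_REMOVE = 0.95   # near-certain harmful: act without human eyes
    AUTO_ALLOW = 0.05    # near-certain benign: publish directly

    def triage(harm_score):
        """Return 'remove', 'review', or 'allow' for one piece of content."""
        if harm_score >= AUTO_REMOVE:
            return "remove"
        if harm_score <= AUTO_ALLOW:
            return "allow"
        return "review"  # only this band reaches a human moderator

    scores = [0.99, 0.50, 0.02, 0.97, 0.30]
    decisions = [triage(s) for s in scores]
    print(decisions)  # only 2 of the 5 items need human review
    ```

    The “implemented correctly” caveat lives in the thresholds: set them too aggressively and you censor or publish wrongly with no human check; set them too loosely and moderators are back to seeing everything.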