Australian Cyber Security professional

  • 1 Post
  • 51 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • I don’t think they would do that unless Lemmy continues to grow to the point where it challenges Reddit. Even then it becomes a technical issue, and I don’t think they could pull it off. It was one thing for Threads to do it, having been designed with federation in mind from day one, but it’s completely different for Reddit. There are so many features that just wouldn’t make the jump, and so much content that would need to be reworked.

    If they were going to do it, it would most likely be a clean break where you just can’t access old Reddit content on Lemmy, but all their new stuff would be accessible.

    I also just don’t see them giving away their content like that after cracking down on the API the way they did.


  • I feel like the amount of training data these AIs require is a pretty compelling argument that AI is nowhere near human intelligence. It shouldn’t take thousands of human lifetimes’ worth of data to train an AI if it’s truly near human-level intelligence. In fact, I think it’s an argument for them not being intelligent whatsoever: with that much training data, everything that could be asked of them should already be covered, and yet they still fail at tasks that aren’t in their data.

    Put simply: a human needs less than one lifetime of training data to be more intelligent than AI. If that hasn’t already solved it, I don’t think throwing more training data and compute at the problem will.

  • Being fooled by Twitter users is worse, as they can link to reputable sources (which usually wouldn’t post clickbait or misleading headlines). There’s also little incentive for Twitter users not to post misleading headlines, while (some) journalists and news sites are trying to build a reputation for reliability. Yes, it would be solved by clicking through to the article, but you shouldn’t have to click every article to make sure the poster isn’t lying about the content.