• 14 Posts
  • 103 Comments
Joined 1 year ago
Cake day: June 13th, 2023


  • There is a significant difference between proxies and a direct missile attack launched by a nation-state, just as there is a significant difference between the US arming a genocidal state and the US dropping bombs directly on civilians. Not to say Iran and the US are blameless for the actions of their proxies, but there are degrees here that are significant. Your kneejerk “Iran bad, Israel good” view of the world is devoid of nuance. Maybe you should get yourself a Twitch stream.




  • Iran’s IRGC say attack on Israel response to killing of Nasrallah

    Iran’s Fars news agency is reporting that Iran’s Revolutionary Guards said the missile attack under way on Israel is in response to the killing of Hezbollah chief Hassan Nasrallah last week as well as that of the Hamas leader Ismail Haniyeh earlier this year.

    “In response to the martyrdom of Ismail Haniyeh, Hassan Nasrallah and (IRGC Guards commander) Nilforoshan, we targeted the heart of the occupied territories,” the IRGC said in a statement.

    So it seems like Iran intends this to be a one-and-done response for everything Israel has done over the last few months.


  • Depends on Israel’s response. When Iran did this in April in retaliation for Israel bombing an Iranian embassy, Iran was like “we have retaliated and are good now,” Israel’s response was limited, and the status quo was restored.

    If Israel decides to escalate (which is their default play lately), or if Iranian missiles hit and force Israel to retaliate, there could be all-out war, potentially involving the US.

    If you want a hint of what’s to come:

    The far-right Israeli finance minister (Bezalel Smotrich) writes on social media: “Like Gaza, Hezbollah and the state of Lebanon, Iran will regret the moment.”


  • As someone else said, eminent domain is a legal process, and thus time-consuming. If I remember correctly, CAH’s plan or gimmick was to divide the land into very small pieces, like one square foot each, and give them to customers. I think it might have been a Black Friday sale gimmick. The idea was that there would be hundreds of thousands of people with ownership of border-wall land, requiring hundreds or thousands of eminent domain lawsuits to be filed. Not an ironclad solution, but in theory an impressive way to jam up the wall project. I assume the land in question is part of this gimmick.


  • My guess is that scale and influence have a lot to do with it

    To break this down a little, first of all: “my guess”. You are guessing because the government that is literally enacting a speech restriction hasn’t explained its rationale for banning one potential source of disinformation versus actual sources of disinformation. So you are left in the position of guessing. To put a finer point on it, you are in the position of assuming the government is acting with good intentions and doing the labor of searching for a justification that fits that assumption. It reminds me of the Iraq War, when so many conversations I had defaulted to “the government wouldn’t do this if they didn’t have a good reason.” I don’t like to be cynical, and I don’t want to be a “both sides, all politicians are corrupt” kind of guy, but I think it’s pretty clear that in this case there is every reason to be cynical. This was an unfortunate confluence of anti-Chinese hate and fear, hostility toward young people, and big tech donations that resulted in the government banning a platform used by millions of Americans to disseminate speech. But because Dems helped do it, so many people feel the need to reflexively defend it, even if that forces them to “guess” and make up rationales.

    As far as influence and reach go, obviously that’s not in the bill. Influence is straight out; RT is highly influential in right-wing spaces. In terms of number of users, that just goes to the profit potential that our good ol’ American firms are missing out on.

    If the US was concerned with propaganda or whatever, they could just regulate the content available on all platforms. They could require all platforms to have transparency around algorithms for recommending content. They could require oversight of how all social media companies operate, much like they do with financial firms or are trying to do with big AI platforms.

    But they didn’t. Because they are not attacking a specific problem, they are attacking a specific company.

    Also RT has been removed from most broadcasters and App Stores in the US.

    Broadcasters voluntarily dropped it after 2016; I think it’s still available on some, including Dish. As for app stores, that’s just false: I just checked the Play Store and it’s right there, ready to download and fill my head with propaganda.


  • The US owns and regulates the frequencies TV and radio are broadcast on. The Internet is not the same. If the threat of foreign propaganda is the purpose, why can I download the official RT (Russia Today, the Russian government-run propaganda outlet) app from the Play Store? If the US is worried about a foreign government spreading propaganda, why target the popular social media app that could theoretically (with no evidence it’s been done yet) be used for propaganda, instead of the actual Russian propaganda app? Hell, I can download the South China Morning Post right from the Play Store, straight Chinese propaganda! There are also dozens of Chinese and other foreign-adversary-run social media platforms, and other apps that could “micro target political messaging campaigns”, available. So why did the US Congress single out one app for punishment?

    Money. The problem isn’t propaganda. The problem is money. The problem is TikTok is, or is on course to be, more popular than our American social media platforms. The problem is American firms are being outcompeted in the marketplace, and the government is stepping in to protect the American data-mining market. The problem is young people are trading their data for TikToks instead of handing it over to be sold to US advertising networks in exchange for YouTube Shorts and Instagram Stories. If the problem were propaganda, the US would go after propaganda. If the problem is just that a Chinese company offers a better product than US companies, then there’s no reason to draft nuanced legislation that goes after all potential foreign influence vectors; you just ban the one app that is hurting the share price of your donors.


  • That’s generally true, and if I’m going to be stuck with an American government excusing Israel’s war crimes, it might as well be one that protects abortion, but there is a big stupid “but” to go with that. Trump hates Bibi. Not because of any considered foreign policy position, but because Trump is mad Bibi called Biden to congratulate him on winning the election. Trump has never forgiven Bibi for this and has been criticizing him on the trail because of it. Our politics are fucked, I guess is what I’m trying to say.


  • AllTrails might have been unique a decade ago, but it’s basically just Yelp for trails, and there are several apps that do the same thing better. The only major change AllTrails has made in the years I’ve been using it is locking more and more features behind a subscription fee. I guess that’s “unique”. Certainly more innovative than a pocket conversational AI that I can have a realtime voice conversation with, send pictures to in order to ask about real-world things I’m seeing, or use to generate a unique image from whatever thought pops into my imagination and share with others nearly instantly. Nothing interesting about that. The decade-old app that collates user-submitted trails and their reviews and charges 40 dollars a year to use any of its tracking features is the real game changer.



  • This is interesting in terms of copyright law. So far the lawsuits from Sarah Silverman and others haven’t gone anywhere, on the theory that the models do not contain copies of the books. Copyright law hinges on whether you have a right to make copies of a work. So the theory has been that the models learned from the books but didn’t retain exact copies, like how a human reads a book and learns its contents but does not store an exact copy in their head. If the models “memorized” training data, including copyrighted works, OpenAI and others may have a problem (note that the researchers said they did the same thing to other models).

    For the Silicon Valley drama addicts, I find it curious that the researchers apparently didn’t run this test on Bard or Anthropic’s Claude; at least the article didn’t mention them. Curious.



  • For those who haven’t read the article, this is not about hallucinations; it’s about how AI can be used maliciously. Researchers used GPT-4 to create a fake data set from a fake human trial, and the result was convincing. Only advanced techniques were able to show the data was faked, like more patient ages ending in 7 or 8 than would be likely in a real sample (a terminal-digit check along the lines of the sketch below). The article points out that most peer review does not go that deep into the data to try to spot fakes. The issue is that a malicious researcher could use AI to generate fake data supporting whatever theory they want and theoretically get it published in a peer-reviewed journal.

    I don’t have the expertise to assess how much of a problem this is. If someone was that determined, couldn’t they already fake data by hand? Does this just make it easier to do, or is AI better at it thereby increasing the risk? I don’t know, but it’s an interesting data point as we as a society think about what AI is capable of and how it could be used maliciously.
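    For a concrete sense of what that kind of check looks like, here is a minimal sketch of a terminal-digit test in Python. The data and the helper name (last_digit_test) are made up for illustration; this is not the researchers’ actual analysis, just the general idea of testing whether last digits look uniform.

    ```python
    # Terminal-digit check: in a genuine sample, the last digit of patient ages
    # should be roughly uniform, so a surplus of ages ending in 7 or 8 is a red flag.
    from collections import Counter
    from scipy.stats import chisquare

    def last_digit_test(ages: list[int]) -> float:
        """Chi-square test of last digits against a uniform distribution.
        Returns the p-value; a tiny p-value means the digits look suspicious."""
        counts = Counter(age % 10 for age in ages)
        observed = [counts.get(d, 0) for d in range(10)]
        expected = [len(ages) / 10] * 10
        return chisquare(observed, expected).pvalue

    # Made-up example: a sample where too many ages end in 7 or 8.
    suspicious_ages = [27, 37, 47, 57, 38, 48, 58, 67, 68, 28] * 20 + [31, 42, 55, 60]
    print(last_digit_test(suspicious_ages))  # prints a very small p-value
    ```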


  • My understanding is Claude has a Pro version at 20 dollars a month that gets you more access and the expanded context window, similar to ChatGPT Pro. The pricing you and the other person who replied to you are talking about is probably the API pricing, which is billed per token (same with ChatGPT’s API pricing). I’ve heard that for most people, using the API ends up being cheaper than paying for Pro (rough comparison sketched below), but it also requires you to know what to do with an API, and I don’t have that technical ability. I pay for ChatGPT Pro. I’ve used the free Claude chat interface, but I haven’t upgraded to Pro. I might try it out though; that big context window is pretty tempting even with a slight downgrade in model quality.
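    For what it’s worth, here is a rough back-of-the-envelope way to compare the two pricing models. The dollar figures below are placeholders I made up for illustration, not actual OpenAI or Anthropic rates; plug in the real per-token prices from the providers’ pricing pages.

    ```python
    # Compare a flat monthly subscription against hypothetical pay-per-token API pricing.
    SUBSCRIPTION_PER_MONTH = 20.00            # flat monthly fee
    API_PRICE_PER_1K_INPUT_TOKENS = 0.003     # hypothetical placeholder, in dollars
    API_PRICE_PER_1K_OUTPUT_TOKENS = 0.015    # hypothetical placeholder, in dollars

    def monthly_api_cost(input_tokens: int, output_tokens: int) -> float:
        """Estimate monthly API spend for a given token volume."""
        return (input_tokens / 1000 * API_PRICE_PER_1K_INPUT_TOKENS
                + output_tokens / 1000 * API_PRICE_PER_1K_OUTPUT_TOKENS)

    # Example: a heavy chat user sending ~1M input and ~500K output tokens a month.
    cost = monthly_api_cost(1_000_000, 500_000)
    print(f"API: ${cost:.2f}/mo vs subscription: ${SUBSCRIPTION_PER_MONTH:.2f}/mo")
    ```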


  • They absolutely “clashed” about the pace of development. They probably also “clashed” about whether employees should get free parking and about the budget for office snacks. The existence of disagreements about various issues is not proof that any one disagreement was the reason for the ouster. Also, your Bloomberg quote cites one source, so who knows about that even. Ilya told employees that the ouster happened because Sam assigned two employees the same project and because he told different board members different opinions about the performance of one employee. I doubt that, but who the fuck knows. The entire piece is based on complete conjecture.

    The one thing we know is that the ouster happened without notice to Sam, without weeks or months of rumors about Sam being on the rocks with the board, and without any notice to OpenAI’s biggest shareholder. All of that smacks of poor leadership and knee-jerk decision making. The board did not act rationally. If the concern was AI safety, there are a million things they could have done to address it. A Friday-afternoon coup that ended up risking 95% of your employees running into the open arms of a giant for-profit monster probably wasn’t the smartest move if the concern was AI safety. This board shouldn’t be praised as some group of humanity’s saviors.

    AI safety is super important. I agree, and I think lots of people should be writing and thinking about it. Lots of people are, and they are doing it in an honest way, and I’m reading a lot of it. This column is just making up a narrative to shoehorn its opinions on AI safety into the news cycle, trying to make a bunch of EA weirdos into martyrs in the process. It’s dumb and it’s lazy.


  • Not rage bait, completely fair. It depends on how you define “quality”. To me, records have a warm, full sound that feels nice filling a room. There is also something to be said for playing music on a physical medium that makes skipping songs annoying: I like physically looking through the albums on my shelf, picking one out, admiring the cover art, and putting it on, and then I’m basically forced to listen to the whole album front to back because track skipping is such a hassle in that format. It’s a ritual you don’t get with Spotify, and a nice break from digital media. So there is a quality to the whole experience that is somewhat separate from the fidelity of the music.

    Or maybe I’m just a hipster trying to justify to myself the money I’ve spent on records lol


  • Anthropic was founded by former OpenAI employees who left because of concerns about AI safety. Their big thing is “constitutional AI” which, as I understand it, is a set of rules it cannot break. So the idea is that it’s safer and harder to jailbreak.

    In terms of performance, it’s better than the free ChatGPT (GPT-3.5) but not as good as GPT-4. My wife has come to prefer it for being friendlier and more helpful; I prefer GPT-4 on ChatGPT. I’ll also note that it seems to refuse requests far more often, which is in line with its “safety” features. For example, a few weeks ago I told Claude my name was Matt Gaetz and I wanted it to write me a resolution removing the Speaker of the House. Claude refused but offered to help me and Kevin McCarthy work through our differences. I think that’s illustrative of its play-nice approach.

    Also, Claude has a much bigger context window, so you can upload bigger files to work with than you can on ChatGPT. Just today Anthropic announced that the Pro plan gets you a 200k-token context window, equivalent to about 500 pages, which beats the yet-to-be-released GPT-4 Turbo and its announced 128k-token window, roughly 300 pages (rough tokens-to-pages arithmetic sketched below). I assume the free version of Claude has a much smaller context window, but probably still bigger than free ChatGPT’s. Claude also just got the ability to search the web and access some other tools, but that is Pro only.
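    The page counts above are just rules of thumb. Here is the rough arithmetic, assuming about 0.75 words per token and about 300 words per printed page; both are common approximations, not official figures from either company.

    ```python
    # Rough tokens-to-pages conversion using common rule-of-thumb factors.
    WORDS_PER_TOKEN = 0.75   # rough average for English text
    WORDS_PER_PAGE = 300     # rough average for a printed page

    def tokens_to_pages(tokens: int) -> float:
        return tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

    for name, window in [("Claude Pro (200k)", 200_000), ("GPT-4 Turbo (128k)", 128_000)]:
        print(f"{name}: ~{tokens_to_pages(window):.0f} pages")
    # Prints roughly 500 and 320 pages, in line with the ballpark figures above.
    ```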


  • Yes, but at the cost of freaking out Microsoft’s customers, who woke up Saturday wondering if the AI they use in their apps, or the Copilot they’ve come to rely on at work, would still be there on Monday. Also, Microsoft’s stock nosedived on Friday because the OpenAI board didn’t have the foresight to fuck up after markets closed. In the meantime, Anthropic has been fielding calls from OpenAI/Microsoft customers like Snap looking to switch for some stability, so much so that Amazon Web Services has set up a whole team to help Anthropic manage the crush of interest.

    So yeah, maybe Microsoft comes out of this having acquired OpenAI for free. But not before having customer and investor confidence shaken by being partnered with, and betting the future of the company on, a startup that turned out to be run by impulsive teenagers. I highly doubt Microsoft made this move, but they are definitely making lemonade out of the lemons the self-aggrandizing EA board threw at them.